On Tue, 19 Dec 2006, Ian Lance Taylor wrote:
Here is a quick list of optimizations that mainline gcc performs which
rely on the idea that signed overflow is undefined. All the types
are, of course, signed. I may have made some mistakes. I think this
gives a good feel for the sorts of
Currently our documentation on -fwrapv is rather short and does not
provide examples or anything to provide such a feel:
This option instructs the compiler to assume that signed arithmetic
overflow of addition, subtraction and multiplication wraps around
using twos-complement representation.
On 2006-12-29 00:55:18 -0800, Paul Eggert wrote:
[...]
Obviously this code is buggy, at least in theory, due to the signed
integer overflows. But rewriting it is not so easy, since we have no
INT_MAX to rescue us as we did in the bigtime_test loop. Here's what
I eventually came up with:
Vincent Lefevre [EMAIL PROTECTED] writes:
[...]
| Shouldn't GCC provide an extension to obtain the maximum and minimum
| values of integer types?
GCC already does. I suspect you meant a _generic_ way a la numeric_limits?
That is doable.
-- Gaby
Roberto Bagnara [EMAIL PROTECTED] writes:
My reading, instead, is that C99 requires unsigned long long int
to have exactly the same number of bits as long long int.
Yes, that's correct. Sorry, I got confused between C89
(which is what that Tandem NSK version supports) and C99.
Paolo Bonzini [EMAIL PROTECTED] writes:
Or you can do, since elsewhere in the code you compute time_t_max:
for (j = 1; j <= time_t_max / 2 + 1; j *= 2)
No, this does not work. It would work to have:
for (j = 1;;)
{
if (j > time_t_max / 2)
break;
j *= 2;
}
Robert Dewar wrote:
Valid programs is too narrow a set, you really do have to pay attention
to normal usage. I very well remember the Burroughs 5500 compiler, which
took advantage of the stack semantics allowed by the standard, but in
fact virtually all Fortran programs of the era assumed
On 2006-12-20 23:40:45 +0100, Marcin Dalecki wrote:
However it's a quite common mistake to forget how
bad floats model real numbers.
It depends on what you are doing. For instance, thanks to the IEEE-754
standard, it is possible to perform exact computations with floats. By
doing unsafe
On 2006-12-21 17:42:15 -0500, Robert Dewar wrote:
Marcin Dalecki wrote:
Of course I didn't think about a substitute for ==. Not! However I think
that checks for |x-y| < epsilon could really be given a significant
speed edge if done in a single go in hardware.
One thing to ponder here
On 2006-12-19 10:44:25 -0800, Paul Eggert wrote:
Sure, but that is trickier. In many cases code operates on
types like time_t that are signed on some platforms and
unsigned on others. It's easy for such code to test for
overflow if you assume wraparound arithmetic, as code like
{ sum = a +
On Fri, Dec 22, 2006 at 01:58:39AM +0100, Denis Vlasenko wrote:
Or this, absolutely typical C code. i386 arch can compare
16 bits at a time here (luckily, no alignment worries on this arch):
# cat tt.c
int f(char *p)
{
if (p[0] == 1 && p[1] == 2) return 1;
return 0;
}
No,
On Saturday 23 December 2006 10:06, Rask Ingemann Lambertsen wrote:
No, because you'd read past the end of the array:
#include <stdlib.h>
int main (int argc, char *argv[])
{
char *a;
if ((a = malloc (sizeof (char
{
int r;
a[0] = 1;
r = f (a);
On Sat, Dec 23, 2006 at 10:06:54AM +0100, Rask Ingemann Lambertsen wrote:
a[0] = 1;
Oops, that should be a[0] = 0 or any other value than 1.
--
Rask Ingemann Lambertsen
int j;
for (j = 1; 0 < j; j *= 2)
if (! bigtime_test (j))
return 1;
Here it is obvious to a programmer that the comparison is
intended to do overflow checking, even though the test
controls the loop.
Well, it's not to me. :-)
Another question for the GCC
Paul Eggert wrote:
Roberto Bagnara [EMAIL PROTECTED] writes:
(The platform I'm thinking of is Tandem NSK/OSS.)
Is this correct? Doesn't C99's 6.2.5#6 mandate that...
This is straying from the subject of GCC and into the
problems of writing portable C code, but since you asked
The
Or you can do, since elsewhere in the code you compute time_t_max:
for (j = 1; j <= time_t_max / 2 + 1; j *= 2)
No, this does not work. It would work to have:
for (j = 1;;)
{
if (j > time_t_max / 2)
break;
j *= 2;
}
Oops.
Paolo
On 22 December 2006 00:59, Denis Vlasenko wrote:
Or this, absolutely typical C code. i386 arch can compare
16 bits at a time here (luckily, no alignment worries on this arch):
Whaddaya mean, no alignment worries? Misaligned accesses *kill* your
performance!
I know this doesn't affect
On Fri, 2006-12-22 at 17:08 +, Dave Korn wrote:
Misaligned accesses *kill* your performance!
Maybe on x86, but on PPC, at least for the (current) Cell's PPU
misaligned accesses for most cases unaligned are optimal.
Thanks,
Andrew Pinski
Dave Korn wrote:
On 22 December 2006 00:59, Denis Vlasenko wrote:
Or this, absolutely typical C code. i386 arch can compare
16 bits at a time here (luckily, no alignment worries on this arch):
Whaddaya mean, no alignment worries? Misaligned accesses *kill* your
performance!
is it
Andrew Pinski wrote:
On Fri, 2006-12-22 at 17:08 +, Dave Korn wrote:
Misaligned accesses *kill* your performance!
Maybe on x86, but on PPC, at least for the (current) Cell's PPU
misaligned accesses for most cases unaligned are optimal.
is that true across cache boundaries?
Thanks,
On Fri, 2006-12-22 at 12:30 -0500, Robert Dewar wrote:
Maybe on x86, but on PPC, at least for the (current) Cell's PPU
misaligned accesses for most cases unaligned are optimal.
is that true across cache boundaries?
For Cell, crossing the 32-byte boundary causes the microcode to happen.
On Friday 22 December 2006 03:03, Paul Brook wrote:
On Friday 22 December 2006 00:58, Denis Vlasenko wrote:
On Tuesday 19 December 2006 23:39, Denis Vlasenko wrote:
There are a lot of 100.00% safe optimizations which gcc
can do. Value range propagation for bitwise operations, for one
This overflow attitude has some resemblance to the attitude that
resulted in the Y2K issues. I don't try to troll, I have a detailed
explanation below.
Some time ago (a year?) I was told on this mailing-list that code
breakage due to undefinedness of signed overflow is not too common (I at
Some time ago (a year?) I was told on this mailing-list that code
breakage due to undefinedness of signed overflow is not too common (I at
least claimed with no evidence that it was more than one bug per 1,000
lines). My claim was counterclaimed by something like most of the time
people work
foo->bar = make_a_bar();
foo->bar->none = value;
being rendered as:
call make_a_bar
foo->bar->none = value
foo->bar = result of make_a_bar()
You are not describing a C compiler.
Um, I'm describing what gcc did?
I think he meant
x = make_a_bar ();
x->none = value;
foo->bar = x;
I don't know if
Paolo Bonzini wrote:
foo->bar = make_a_bar();
foo->bar->none = value;
being rendered as:
call make_a_bar
foo->bar->none = value
foo->bar = result of make_a_bar()
You are not describing a C compiler.
Um, I'm describing what gcc did?
I think he meant
x = make_a_bar ();
x->none = value;
foo->bar
Maybe he forgot the delicate details? The issue may happen if this
example was incomplete (my completion may need some tweaking to make
it more realistic):
#define make_a_bar(ppInstance) \
*(unsigned**)(ppInstance)=make_a_uint(sizeof(struct bar))
make_a_bar(foo->bar);
Paolo Bonzini wrote:
Some time ago (a year?) I was told on this mailing-list that code
breakage due to undefinedness of signed overflow is not too common (I
at least claimed with no evidence that it was more than one bug per
1,000 lines). My claim was counterclaimed by something like most of
Paolo Bonzini [EMAIL PROTECTED] writes:
On the autoconf mailing list, Paul Eggert mentioned as a good
compromise that GCC could treat signed overflow as undefined only for
loops and not in general.
What I meant to propose (and perhaps did not propose clearly
enough) is that if the C
On Thu, 21 Dec 2006, Paul Eggert wrote:
But because bigtime_test wants an int, this causes the test
program to compute the equivalent of (int) ((unsigned int)
INT_MAX + 1), and C99 says that if you cannot assume
wrapping semantics this expression has undefined behavior in
the common case
Marcin Dalecki wrote:
But the same applies to floating point numbers. There, the situation
is even better, because nowadays I can rely on a float or double
being the representation defined in IEEE 754 because there is such
overwhelming hardware support.
You better don't. Really! Please
On 12/20/06, Marcin Dalecki [EMAIL PROTECTED] wrote:
You better don't. Really! Please just realize for example the impact
of the (in)famous 80 bit internal (over)precision of a
very common IEEE 754 implementation...
volatile float b = 1.;
if (1. / 3. == b / 3.) {
printf("HALLO!\n");
} else {
On 2006-12-21, at 22:19, David Nicol wrote:
It has always seemed to me that floating point comparison could
be standardized to regularize the exponent and ignore the least
significant
few bits and doing so would save a lot of headaches.
Well actually it wouldn't save the world. However
Joseph S. Myers [EMAIL PROTECTED] writes:
Conversion of out-of-range integers to signed types is
implementation-defined not undefined,
Thanks for the correction; I keep forgetting that. However,
a conforming implementation is allowed to raise a signal for
those conversions, which could break
On 2006-12-21, at 22:19, David Nicol wrote:
It has always seemed to me that floating point comparison could
be standardized to regularize the exponent and ignore the least
significant
few bits and doing so would save a lot of headaches.
This would be a real nuisance. This myth that you
On 2006-12-21, at 23:17, Robert Dewar wrote:
Marcin Dalecki:
Well actually it wouldn't save the world. However adding an
op-code implementing: x eqeps y = |x - y| < epsilon, would be
indeed helpful.
Maybe some m-f has already patented it, and that's the reason we
don't see it
already
Marcin Dalecki wrote:
On 2006-12-21, at 23:17, Robert Dewar wrote:
Marcin Dalecki:
Well actually it wouldn't save the world. However adding an
op-code implementing: x eqeps y = |x - y| < epsilon, would be
indeed helpful.
Maybe some m-f has already patented it, and that's the reason we
Marcin Dalecki wrote:
Of course I didn't think about a substitute for ==. Not! However I think
that checks for |x-y| < epsilon could really be given a significant
speed edge if done in a single go in hardware.
One thing to ponder here is that things like this are what lead
to CISC
On 2006-12-21, at 23:42, Robert Dewar wrote:
Marcin Dalecki wrote:
Of course I didn't think about a substitute for ==. Not! However I
think
that checks for |x-y| < epsilon could really be given a
significant speed edge
if done in a single go in hardware.
One thing to ponder here is
Paul Eggert [EMAIL PROTECTED] writes:
That probably sounds vague, so here's the code that beta
gcc -O2 actually broke (which started this whole thread):
int j;
for (j = 1; 0 < j; j *= 2)
if (! bigtime_test (j))
return 1;
It's interesting to note that in gcc 4.1
On Thu, 21 Dec 2006, Ian Lance Taylor wrote:
Another question for the GCC experts: would it fix the bug
if we replaced j *= 2 with j <<= 1 in this sample code?
Well, mainline VRP isn't clever enough to understand that case. But
it doesn't make the code any more defined. A left shift of a
Ian Lance Taylor [EMAIL PROTECTED] writes:
We could disable VRP's assumptions about signed overflow. I don't
know what else we could do to fix this case. I don't even know how we
could issue a useful warning. Perhaps there is a way.
It is a knotty problem. Thanks for thinking about it.
On Tuesday 19 December 2006 23:39, Denis Vlasenko wrote:
There are a lot of 100.00% safe optimizations which gcc
can do. Value range propagation for bitwise operations, for one
Or this, absolutely typical C code. i386 arch can compare
16 bits at a time here (luckily, no alignment worries on
On Friday 22 December 2006 00:58, Denis Vlasenko wrote:
On Tuesday 19 December 2006 23:39, Denis Vlasenko wrote:
There are a lot of 100.00% safe optimizations which gcc
can do. Value range propagation for bitwise operations, for one
Or this, absolutely typical C code. i386 arch can compare
Paul Brook wrote:
On Friday 22 December 2006 00:58, Denis Vlasenko wrote:
On Tuesday 19 December 2006 23:39, Denis Vlasenko wrote:
There are a lot of 100.00% safe optimizations which gcc
can do. Value range propagation for bitwise operations, for one
Or this, absolutely typical C code. i386
Paul Eggert [EMAIL PROTECTED] writes:
| Ian Lance Taylor [EMAIL PROTECTED] writes:
|
| We could disable VRP's assumptions about signed overflow. I don't
| know what else we could do to fix this case. I don't even know how we
| could issue a useful warning. Perhaps there is a way.
|
| It
On Friday 22 December 2006 02:06, Robert Dewar wrote:
Paul Brook wrote:
On Friday 22 December 2006 00:58, Denis Vlasenko wrote:
On Tuesday 19 December 2006 23:39, Denis Vlasenko wrote:
There are a lot of 100.00% safe optimizations which gcc
can do. Value range propagation for bitwise
Paul Brook wrote:
Who says the optimisation is valid? The language standard?
The example was given as something that's 100% safe to optimize. I'm
disagreeing with that assertion. The use I describe isn't that unlikely if
the code was written by someone with poor knowledge of C.
My point is
Paul Eggert wrote:
Also, such an approach assumes that unsigned long long int
has at least as many bits as long long int. But this is an
unportable assumption; C99 does not require this. We have
run into hosts where the widest signed integer type has much
greater range than the widest
Roberto Bagnara [EMAIL PROTECTED] writes:
(The platform I'm thinking of is Tandem NSK/OSS.)
Is this correct? Doesn't C99's 6.2.5#6 mandate that...
This is straying from the subject of GCC and into the
problems of writing portable C code, but since you asked
The Tandem NSK/OSS
Paul Schlie wrote:
As a compromise, I'd vote that no optimizations may alter program behavior
in any way not explicitly diagnosed in each instance of their application.
Sounds reasonable, but it is impossible and impractical! And I think
anyone familiar with compiler technology and
Robert Dewar writes:
Paul Brook wrote:
As opposed to a buggy program with wilful disregard for signed overflow
semantics? ;-)
I know there is a smiley there, but in fact I think it is useful to
distinguish these two cases.
This is, I think, a very interesting comment. I've
Andrew Haley wrote:
Is it simply that one error is likely to be more common than another?
Or is there some more fundamental reason?
I think it is more fundamental. Yes, of course any optimization
will change resource utilization (space, time). An optimization
may well make a program larger,
On 2006-12-20, at 00:10, Richard B. Kreckel wrote:
C89 did not refer to IEEE 754 / IEC 60559. Yet, as far as I am aware,
-ffast-math or the implied optimizations have never been turned on
by GCC
unless explicitly requested. That was a wise decision.
By the same token it would be wise to
Hello,
Paul Brook wrote:
Compiler can optimize it any way it wants,
as long as result is the same as unoptimized one.
We have an option for that. It's called -O0.
Pretty much all optimization will change the behavior of your program.
Now that's a bit TOO strong a
Denis Vlasenko writes:
On Tuesday 19 December 2006 20:05, Andrew Haley wrote:
Denis Vlasenko writes:
I wrote this just a few days ago:
do {
int32_t v1 = v << 1;
if (v < 0) v1 ^= mask;
v = v1;
Zdenek Dvorak wrote:
actually, you do not even need (invalid) multithreaded programs to
realize that register allocation may change behavior of a program.
If the size of the stack is bounded, register allocation may
cause or prevent program from running out of stack, thus turning a
crashing
Dave Korn wrote:
On 20 December 2006 02:28, Andrew Pinski wrote:
Paul Brook wrote:
Pretty much all optimization will change the behavior of your program.
Now that's a bit TOO strong a statement, critical optimizations like
register allocation and instruction scheduling will generally not
On 20 December 2006 16:25, Matthew Woehlke wrote:
Dave Korn wrote:
On 20 December 2006 02:28, Andrew Pinski wrote:
Paul Brook wrote:
Pretty much all optimization will change the behavior of your program.
Now that's a bit TOO strong a statement, critical optimizations like
register
On 12/20/06, Dave Korn [EMAIL PROTECTED] wrote:
...
We (in a major, commercial application) ran into exactly this issue.
'asm volatile("lock orl $0,(%%esp)"::)' is your friend when this happens
(it is a barrier across which neither the compiler nor CPU will reorder
things). Failing that, no-op
Dave Korn wrote:
Particularly lock-free queues whose correct
operation is critically dependent on the order in which the loads and
stores are performed.
No, absolutely not. Lock-free queues work by (for example) having a single
producer and a single consumer, storing the queue in a
Marcin Dalecki wrote:
Numerical stability of incomplete floating point representations are
an entirely different
problem category then some simple integer tricks. In the first case
the difficulties are inherent
to the incomplete representation of the calculation domain. In the
second case
Matthew Woehlke [EMAIL PROTECTED] writes:
That said, I've seen even stranger things, too. For example:
foo->bar = make_a_bar();
foo->bar->none = value;
being rendered as:
call make_a_bar
foo->bar->none = value
foo->bar = result of make_a_bar()
You are not describing a C compiler.
Andreas.
On 2006-12-20, at 22:48, Richard B. Kreckel wrote:
2) Signed types are not an algebra, they are not even a ring, at
least when their elements are interpreted in the canonical way as
integer numbers. (Heck, what are they?)
You are apparently using a different definition of an algebra or
On 12/20/06, Marcin Dalecki [EMAIL PROTECTED] wrote:
You are apparently using a different definition of an algebra or ring
than the common one.
Fascinating discussion. Pointers to canonical on-line definitions of
the terms algebra and ring as used in compiler design please?
Marcin Dalecki wrote:
On 2006-12-20, at 22:48, Richard B. Kreckel wrote:
2) Signed types are not an algebra, they are not even a ring, at
least when their elements are interpreted in the canonical way as
integer numbers. (Heck, what are they?)
You are apparently using a different
But the same applies to floating point numbers. There, the
situation is even better, because nowadays I can rely on a float or
double being the representation defined in IEEE 754 because there
is such overwhelming hardware support.
You better don't. Really! Please just realize for example
Paul Brook [EMAIL PROTECTED] writes:
| Compiler can optimize it any way it wants,
| as long as result is the same as unoptimized one.
|
| We have an option for that. It's called -O0.
|
| Pretty much all optimization will change the behavior of your program. The
| important distinction is
Dave Korn [EMAIL PROTECTED] writes:
[...]
| We (in a major, commercial application) ran into exactly this issue.
| 'asm volatile("lock orl $0,(%%esp)"::)' is your friend when this happens
| (it is a barrier across which neither the compiler nor CPU will reorder
| things). Failing that, no-op
Andrew Haley [EMAIL PROTECTED] writes:
[...]
| C is no longer a kind of high-level assembly language:
| it's defined by a standard, in terms of an abstract machine, and some
| operations are not well-defined.
that does not mean C is not a kind of high-level assembly language.
:-/
-- Gaby
Matthew Woehlke [EMAIL PROTECTED] writes:
That said, I've seen even stranger things, too. For example:
foo->bar = make_a_bar();
foo->bar->none = value;
being rendered as:
call make_a_bar
foo->bar->none = value
foo->bar = result of make_a_bar()
That would obviously be a bug in the
On 20 December 2006 20:16, Seongbae Park wrote:
On 12/20/06, Dave Korn [EMAIL PROTECTED] wrote:
...
We (in a major, commercial application) ran into exactly this issue.
'asm volatile("lock orl $0,(%%esp)"::)' is your friend when this happens
(it is a barrier across which neither the compiler
On Thursday 21 December 2006 02:38, Gabriel Dos Reis wrote:
Paul Brook [EMAIL PROTECTED] writes:
| Compiler can optimize it any way it wants,
| as long as result is the same as unoptimized one.
|
| We have an option for that. It's called -O0.
|
| Pretty much all optimization will change
On 20 December 2006 21:42, Matthew Woehlke wrote:
Dave Korn wrote:
Particularly lock-free queues whose correct
operation is critically dependent on the order in which the loads and
stores are performed.
No, absolutely not. Lock-free queues work by (for example) having a
single producer
On 21 December 2006 02:50, Gabriel Dos Reis wrote:
Andrew Haley [EMAIL PROTECTED] writes:
[...]
C is no longer a kind of high-level assembly language:
it's defined by a standard, in terms of an abstract machine, and some
operations are not well-defined.
that does not mean C is not a
Gabriel Dos Reis wrote:
I don't believe this particular issue of optimization based on
undefined behaviour can be resolved by just telling people hey
look, the C standard says it is undefined, therefore we can optimize.
And if you're not happy, just tell the compiler not to optimize.
For not
Brooks Moses wrote:
Now, if your argument is that following the LIA-1 standard will prevent
optimizations that could otherwise be made if one followed only the C
standard, that's a reasonable argument, but it should not be couched as
if it implies that preventing the optimizations would not
Robert Dewar writes:
Brooks Moses wrote:
Now, if your argument is that following the LIA-1 standard will
prevent optimizations that could otherwise be made if one
followed only the C standard, that's a reasonable argument, but
it should not be couched as if it implies that
Andrew Haley wrote:
Robert Dewar writes:
Brooks Moses wrote:
Now, if your argument is that following the LIA-1 standard will
prevent optimizations that could otherwise be made if one
followed only the C standard, that's a reasonable argument, but
it should not be couched as if
On Tue, 2006-12-19 at 03:42 -0500, Robert Dewar wrote:
When I worked on SPITBOL, people all the time were suggesting
optimizations in letters to the SPITBOL newsletter. I imposed
a rule saying that no one could propose an optimization unless
they could show ONE example program where the
Andrew Pinski wrote:
I don't have the number of times this shows up or how much it helps but
it does help out on being able to vectorize this loop.
Just to be clear, when I ask for quantitative data, it is precisely
data about how much it helps. It is always easy enough to show
cases where
Ralf Wildenhues [EMAIL PROTECTED] writes:
Maybe it's also just an unintended bug I happened to observe
(and take for given behavior)?
I read up a bit more and it looks like it is intended behavior.
However, this disruptive change isn't documented in
http://gcc.gnu.org/gcc-4.2/changes.html,
Robert Dewar writes:
Andrew Haley wrote:
We've already defined `-fwrapv' for people who need nonstandard
arithmetic.
Nonstandard implies that the result does not conform with the standard,
I don't think it does; it merely implies that any program which
requires -fwrapv for correct
Does the test hang forever?
No, the timeout works.
So the app builds. But it has latent bugs. Wonderful.
Is the performance gain by this change to gcc -O2 really worth all
this software engineering hassle and breakage for typical
applications? I'm talking about apps like 'date', 'touch',
Andrew Haley [EMAIL PROTECTED] writes:
| Robert Dewar writes:
| Andrew Haley wrote:
|
|We've already defined `-fwrapv' for people who need nonstandard
|arithmetic.
|
| Nonstandard implies that the result does not conform with the standard,
|
| I don't think it does; it merely
Gabriel Dos Reis writes:
Andrew Haley [EMAIL PROTECTED] writes:
| Robert Dewar writes:
| Andrew Haley wrote:
|
|We've already defined `-fwrapv' for people who need nonstandard
|arithmetic.
|
| Nonstandard implies that the result does not conform with the
* Andrew Pinski:
A simple loop like:
int foo ()
{
int a[N];
int i;
int n;
for (i = 0; i <= n; i++)
a[i] = 2;
}
we cannot find how many iterations it runs without knowing that signed
types overflow.
In this case, the assumption is not needed because the lack of
overflow
By the way, as I've tried to describe here:
http://cert.uni-stuttgart.de/advisories/c-integer-overflow.php
variable range tracking can result in reintroduction of
supposedly-fixed security vulnerabilities. 8-(
Interesting read. I agree with the proposed fix; however, note that GCC
does not
Andrew Haley [EMAIL PROTECTED] writes:
| Gabriel Dos Reis writes:
| Andrew Haley [EMAIL PROTECTED] writes:
|
| | Robert Dewar writes:
| | Andrew Haley wrote:
| |
| |We've already defined `-fwrapv' for people who need nonstandard
| |arithmetic.
| |
| |
Andrew Haley wrote:
Robert Dewar writes:
Andrew Haley wrote:
We've already defined `-fwrapv' for people who need nonstandard
arithmetic.
Nonstandard implies that the result does not conform with the standard,
I don't think it does; it merely implies that any program which
Gabriel Dos Reis wrote:
Andrew Haley [EMAIL PROTECTED] writes:
| Robert Dewar writes:
| Andrew Haley wrote:
|
|We've already defined `-fwrapv' for people who need nonstandard
|arithmetic.
|
| Nonstandard implies that the result does not conform with the standard,
|
| I don't
Andrew Haley wrote:
I suspect the actual argument must be somewhere else.
I'm sure it is. The only purpose of my mail was to clarify what I
meant by nonstandard, which in this case was not strictly
conforming. I didn't intend to imply anything else.
But a compiler that implements wrap
* Paolo Bonzini:
Interesting read. I agree with the proposed fix; however, note that
GCC does not make the result of overflowing signed left-shifts
undefined, exactly because in this case the overflow is relied upon by
too many existing programs
Is this documented somewhere? Without
Florian Weimer wrote:
* Paolo Bonzini:
Interesting read. I agree with the proposed fix; however, note that
GCC does not make the result of overflowing signed left-shifts
undefined, exactly because in this case the overflow is relied upon by
too many existing programs
Is this documented
Hello,
Now, if your argument is that following the LIA-1 standard will prevent
optimizations that could otherwise be made if one followed only the C
standard, that's a reasonable argument, but it should not be couched as
if it implies that preventing the optimizations would not be
Zdenek Dvorak wrote:
IMHO, using loops relying on the behavior of overflow of an
induction variable (*) is an ugly hack and whoever writes such a code
does not deserve for his program to work.
I suspect everyone would agree on this, and in practice I would
guess that
a) there are no programs
On Tue, 19 Dec 2006, Florian Weimer wrote:
* Paolo Bonzini:
Interesting read. I agree with the proposed fix; however, note that
GCC does not make the result of overflowing signed left-shifts
undefined, exactly because in this case the overflow is relied upon by
too many existing
Joseph S. Myers wrote:
On Tue, 19 Dec 2006, Florian Weimer wrote:
* Paolo Bonzini:
Interesting read. I agree with the proposed fix; however, note that
GCC does not make the result of overflowing signed left-shifts
undefined, exactly because in this case the overflow is relied upon by
too
* Joseph S. Myers:
On Tue, 19 Dec 2006, Florian Weimer wrote:
* Paolo Bonzini:
Interesting read. I agree with the proposed fix; however, note that
GCC does not make the result of overflowing signed left-shifts
undefined, exactly because in this case the overflow is relied upon by
We've optimized expressions such as (a*2)/2 on the basis of overflow
being undefined for a very long time, not just loops.
What is (a*2)/2 optimized to? Certainly it has the value a if you wrap,
so you are not necessarily depending on undefined here.
No, it has not. For example, if a is