https://gcc.gnu.org/bugzilla/show_bug.cgi?id=69984
--- Comment #9 from Edmar Wienskoski ---
Ok. Thanks for the clarification.
The comparison is made with an unsigned variable, but gcc
is certain that this variable cannot legally hold a value that
is not representable in an int.
That is why the signed comparison is safe.
--- Comment #8 from Jakub Jelinek ---
0x7fff * 0x7fff is 0x3fff0001 and that is representable in int, so there is no
overflow.
0xb504 * 0xb504 is 0x7ffea810 and thus also representable in int, no overflow.
0xb505ULL * 0xb505ULL is 0x80001219ULL, which is not representable in a
32-bit int, so multiplying the promoted int operands would overflow.
--- Comment #7 from Andrew Pinski ---
(In reply to Edmar Wienskoski from comment #6)
> Hummm, You are almost convincing me, one last question,
> be patient with me.
>
> As Andrew posted:
> C = A * B
> should be equivalent to:
> C = (unsigned long)( ((int)A) * ((int)B) )
--- Comment #6 from Edmar Wienskoski ---
Hummm, You are almost convincing me, one last question,
be patient with me.
As Andrew posted:
C = A * B
should be equivalent to:
C = (unsigned long)( ((int)A) * ((int)B) )
The variables are promoted *before* the operation.
--- Comment #5 from Jakub Jelinek ---
Even if the computation is 32-bit, by the time you multiply, say, a large
enough unsigned short int value (0xb505 or above) with itself, you get
undefined behavior.
So, as has been said, if you want to perform the multiplication in unsigned
arithmetic, cast at least one of the operands to unsigned int.
--- Comment #4 from Edmar Wienskoski ---
I forgot that default on x86 is 64 bits.
Repeating the test with -m32 still shows the signed comparison.
Here:
#include
int main ()
{
unsigned short int A, B;
unsigned long C,D;
unsigned long E =
Jakub Jelinek changed:

           What    |Removed     |Added
----------------------------------------------------------
                 CC|            |jakub at gcc dot gnu.org

--- Comment #3
--- Comment #2 from Edmar Wienskoski ---
Right, but the variables A and B are *unsigned short*.
Both A and B are promoted to signed int, but their maximum value is 65535.
So, the result of A*B *can* be bigger than 31 bits.
Thanks
Andrew Pinski changed:

           What    |Removed     |Added
----------------------------------------------------------
             Status|UNCONFIRMED |RESOLVED
         Resolution|---         |