https://gcc.gnu.org/bugzilla/show_bug.cgi?id=113226

--- Comment #2 from Patrick Palka <ppalka at gcc dot gnu.org> ---
(In reply to Patrick Palka from comment #1)
> Huh, how bizarre.
> 
> > i == 1, j == -100, i*j == 4294967196, max_type(i) == 1, max_type(i)*j == 
> > -100
> 
> Here i and j are just ordinary 'long long', so I don't get why i*j is
> 4294967196 instead of -100?

Everything else, in particular that int64_t(max_type(i)*j) is -100, seems
correct/expected to me.  FWIW that expression computes the product of the
corresponding promoted/sign-extended 65-bit precision values, and the overall
check is analogous to

  int32_t i = 1, j = -100;
  assert (int64_t(i*j) == int64_t(i)*j);

except the two precisions are 64/65 bits instead of 32/64 bits.
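
Spelled out as a stand-alone program (just a sketch of that 32/64-bit
analogy, not the actual testsuite check):

  #include <cassert>
  #include <cstdint>

  int main()
  {
    std::int32_t i = 1, j = -100;
    // The left side multiplies in (32-bit) int precision and then widens
    // the result; the right side widens i first and multiplies in 64-bit
    // precision.  The product -100 is representable either way, so both
    // sides compare equal.
    assert (std::int64_t(i*j) == std::int64_t(i)*j);
  }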

(When shorten_p is true, the overall check is analogous to
 assert (i*j == int32_t(int64_t(i)*j)) instead.)
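
And the shorten_p analogue as a stand-alone program (again only a sketch):

  #include <cassert>
  #include <cstdint>

  int main()
  {
    std::int32_t i = 1, j = -100;
    // Here the product is computed in the wider 64-bit precision and then
    // narrowed back to 32 bits; the check asserts that the narrowing does
    // not change the value, which holds since -100 fits in int32_t.
    assert (i*j == std::int32_t(std::int64_t(i)*j));
  }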
