https://gcc.gnu.org/bugzilla/show_bug.cgi?id=91191

--- Comment #10 from Andrew Macleod <amacleod at redhat dot com> ---
Created attachment 62659
  --> https://gcc.gnu.org/bugzilla/attachment.cgi?id=62659&action=edit
patch

OK, revisiting this.

It seems like we can simply treat VIEW_CONVERT the same as a cast, can we not?

It's only properly defined when the precisions are the same, so any
VIEW_CONVERT between integral values is basically the same as a cast, correct?
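
As an illustration of that point (plain C++ I'm using to make the argument,
not anything from the patch): a bit-for-bit reinterpretation between two
integral types of the same precision produces the same value as a cast, so a
range on the input carries over the same way.

/* Plain C++ illustration, not GCC code: a same-precision bit-for-bit
   reinterpretation behaves exactly like a cast, so the range carries
   over identically.  */

#include <cassert>
#include <cstdint>
#include <cstring>

static int16_t
reinterpret_u16 (uint16_t u)
{
  int16_t s;
  std::memcpy (&s, &u, sizeof s);   /* same-precision reinterpretation */
  return s;
}

int
main ()
{
  /* If u is known to be in [0, 100], the reinterpreted value is also in
     [0, 100], which is exactly the range a cast would produce.  */
  for (uint16_t u = 0; u <= 100; ++u)
    assert (reinterpret_u16 (u) == static_cast<int16_t> (u));
  return 0;
}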

And if the LHS is LARGER than the RHS, it's undefined, so we can give it the
same behaviour as a cast. It shouldn't really matter whether we zero or sign
extend, correct?

And for a truncating cast, we are only picking up the lower X bits anyway, so
it still works.
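
To make the two size-mismatch cases concrete (again, just a plain C++ sketch
of the cast behaviour being borrowed, not GCC code):

#include <cassert>
#include <cstdint>

int
main ()
{
  /* Widening: zero- and sign-extension only differ for negative inputs,
     and since a size-changing VIEW_CONVERT is undefined anyway, either
     choice gives a usable range.  */
  int8_t v = -1;
  uint32_t zext = static_cast<uint8_t> (v);   /* zero-extend: 255 */
  int32_t sext = v;                           /* sign-extend: -1  */
  assert (zext == 255 && sext == -1);

  /* Truncation: a truncating cast keeps only the low 16 bits.  */
  uint32_t wide = 0x12345678u;
  uint16_t narrow = static_cast<uint16_t> (wide);
  assert (narrow == 0x5678);
  return 0;
}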

It will not work for doubles or any other non-integral values, so I leave
those alone.

I have a patch which bootstraps, causes no regressions, and fixes this
testcase.

It basically uses operator_cast if the operands are integral. Does this seem
reasonable?
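
For concreteness, here is a self-contained model of the dispatch idea; the
names and structure are simplified assumptions of mine, not the attached
patch or the real range-ops API:

/* Self-contained model, not GCC internals: route a VIEW_CONVERT between
   integral types to the cast handling, and leave everything else
   (doubles, vectors, ...) unhandled.  */

#include <cassert>

enum type_kind { INTEGRAL, FLOATING, VECTOR };
enum handler { CAST_HANDLER, NO_HANDLER };

static handler
view_convert_handler (type_kind lhs, type_kind rhs)
{
  if (lhs == INTEGRAL && rhs == INTEGRAL)
    return CAST_HANDLER;   /* treat it exactly like a cast */
  return NO_HANDLER;       /* non-integral operands are left alone */
}

int
main ()
{
  assert (view_convert_handler (INTEGRAL, INTEGRAL) == CAST_HANDLER);
  assert (view_convert_handler (FLOATING, INTEGRAL) == NO_HANDLER);
  return 0;
}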
