https://gcc.gnu.org/bugzilla/show_bug.cgi?id=113759

Jakub Jelinek <jakub at gcc dot gnu.org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
           Assignee|unassigned at gcc dot gnu.org      |jakub at gcc dot gnu.org
             Status|NEW                         |ASSIGNED

--- Comment #6 from Jakub Jelinek <jakub at gcc dot gnu.org> ---
Created attachment 57321
  --> https://gcc.gnu.org/bugzilla/attachment.cgi?id=57321&action=edit
gcc14-pr113759.patch

Ugh, the preexisting handling in convert_*_to_widen is already weird, and the
new patch inconsistently changed only convert_mult_to_widen and not
convert_plusminus_to_widen.
In my limited understanding of the code, if actual_precision or from_unsignedN
differ from the corresponding properties of typeN, we want to cast, and we
likewise want to cast if TREE_TYPE (rhsN) is not uselessly convertible to typeN.
For INTEGER_CSTs such casts should always be done with fold_convert, otherwise
with build_and_insert_cast.
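
For reference, a minimal sketch of the cast handling described above (not the
attached patch itself): the helper name maybe_cast_widen_operand and its
signature are hypothetical, it assumes the surrounding context of
convert_mult_to_widen in tree-ssa-math-opts.cc, and the
build_nonstandard_integer_type step is an assumption about how a
precision/signedness mismatch would be resolved.

  static tree
  maybe_cast_widen_operand (gimple_stmt_iterator *gsi, location_t loc,
			    tree type, tree rhs,
			    unsigned int actual_precision, bool from_unsigned)
  {
    /* We want a cast if the required precision or signedness differs from
       TYPE, or if RHS's type is not uselessly convertible to TYPE.  */
    bool prec_mismatch = (actual_precision != TYPE_PRECISION (type)
			  || from_unsigned != TYPE_UNSIGNED (type));
    if (!prec_mismatch && useless_type_conversion_p (type, TREE_TYPE (rhs)))
      return rhs;

    /* When the precision or signedness differs, the cast target has to be
       a matching integer type rather than TYPE itself (assumption).  */
    if (prec_mismatch)
      type = build_nonstandard_integer_type (actual_precision, from_unsigned);

    /* INTEGER_CSTs are always converted by folding; everything else gets an
       explicit cast statement inserted before the widening statement.  */
    if (TREE_CODE (rhs) == INTEGER_CST)
      return fold_convert (type, rhs);
    return build_and_insert_cast (gsi, loc, type, rhs);
  }

Applying the same treatment in convert_plusminus_to_widen would keep the two
paths consistent.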
