https://gcc.gnu.org/bugzilla/show_bug.cgi?id=80770

Andrew Pinski <pinskia at gcc dot gnu.org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
             Status|NEW                         |ASSIGNED
           Severity|normal                      |enhancement
           Assignee|unassigned at gcc dot gnu.org      |pinskia at gcc dot gnu.org

--- Comment #2 from Andrew Pinski <pinskia at gcc dot gnu.org> ---
Mine.

Though if we lower this, we will still need to optimize the following at the
gimple level:
  _1 = BIT_FIELD_REF <_6, 1, 0>;
  _2 = ~_1;
  _8 = BIT_INSERT_EXPR <_6, _2, 0 (1 bits)>;

to _8 = _6 ^ 1;
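
(As a concrete illustration, and not the testcase from this PR: flipping a
one-bit bit-field, as in the C sketch below with an illustrative struct layout,
is the kind of source that lowers to the extract/not/insert sequence above and
should end up as a single xor on the containing word.)

  /* Minimal sketch only; the struct and field widths are illustrative.  */
  struct s { unsigned a : 1; unsigned rest : 31; };

  void
  flip (struct s *p)
  {
    p->a = !p->a;   /* read bit 0, complement it, write it back */
  }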

Or in general:
BIT_INSERT_EXPR <_6, bit_not (BIT_FIELD_REF <_6, bits, shift>), shift (bits)>
to
_6 ^ shiftedmask(bits, shift);
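
(The identity behind that rule can be checked directly in C; the sketch below
uses illustrative names, with shiftedmask simply being 'bits' ones shifted
left by 'shift'.)

  /* Sketch: extracting the field [shift, shift+bits), complementing it and
     inserting it back equals XOR with a mask of 'bits' ones at 'shift'.  */
  unsigned
  shiftedmask (unsigned bits, unsigned shift)
  {
    return (bits >= 32 ? ~0u : (1u << bits) - 1u) << shift;
  }

  unsigned
  insert_not (unsigned x, unsigned bits, unsigned shift)
  {
    unsigned mask  = shiftedmask (bits, shift);
    unsigned field = (x & mask) >> shift;         /* BIT_FIELD_REF    */
    unsigned nfld  = (~field << shift) & mask;    /* bit_not          */
    return (x & ~mask) | nfld;                    /* BIT_INSERT_EXPR  */
  }
  /* For all x:  insert_not (x, bits, shift) == x ^ shiftedmask (bits, shift).  */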

And maybe add:
BIT_INSERT_EXPR <_6, bit_op (BIT_FIELD_REF <_6, bits, shift>, B), shift (bits)>
to
_6 bit_op (convert (convert:u B) << shift);
where u is the unsigned variant of B's type, used when B is not already
unsigned.
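
(As a sanity check of that form for one choice of bit_op, here is a sketch
with bit_op taken as ior; B is assumed to fit in the field's precision, as the
narrower type of the BIT_FIELD_REF would imply.  The same shape holds for xor;
all helper names are illustrative.)

  /* Sketch for bit_op = ior: inserting (field | B) back into x is the same
     as x | ((unsigned) B << shift) when B fits in 'bits' bits.  */
  unsigned
  insert_or (unsigned x, unsigned b, unsigned bits, unsigned shift)
  {
    unsigned mask  = (bits >= 32 ? ~0u : (1u << bits) - 1u) << shift;
    unsigned field = (x & mask) >> shift;             /* BIT_FIELD_REF    */
    unsigned newf  = ((field | b) << shift) & mask;   /* bit_op (..., B)  */
    return (x & ~mask) | newf;                        /* BIT_INSERT_EXPR  */
  }
  /* For B < (1u << bits):  insert_or (x, B, bits, shift) == x | (B << shift).  */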
