On Thu, Jun 2, 2022 at 10:00 AM Jakub Jelinek wrote:
Hi!
As the following testcase shows, our x86 backend support for optimizing
out useless masking of shift/rotate counts, when using instructions
that naturally truncate the count themselves, is insufficient.
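To illustrate the kind of source the optimization targets (a minimal sketch, not the testcase from this patch): x86 shift and rotate instructions already reduce the count modulo the operand width, so an explicit mask of the count is redundant and the backend should drop it rather than emit a separate AND.

```c
/* Hypothetical example (not the patch's testcase): on x86, SHL/SHR/ROL
   take the shift count modulo 32 for 32-bit operands, so the explicit
   "& 31" below is redundant -- the backend can emit a plain shift by
   %cl with no separate AND instruction.  */
unsigned int
shl_masked (unsigned int x, unsigned int n)
{
  return x << (n & 31);
}
```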
The *_mask define_insn_and_split patterns use
(subreg:QI (and:SI (match_operand:SI)