[Lldb-commits] [flang] [lldb] [libcxx] [libc] [llvm] [clang] [libunwind] [clang-tools-extra] [compiler-rt] [lld] [X86] Use RORX over SHR imm (PR #77964)

2024-02-02 Thread Bryce Wilson via lldb-commits

https://github.com/Bryce-MW updated 
https://github.com/llvm/llvm-project/pull/77964

From d4c312b9dbf447d0a53dda0e6cdc482bd908430b Mon Sep 17 00:00:00 2001
From: Bryce Wilson 
Date: Fri, 12 Jan 2024 16:01:32 -0600
Subject: [PATCH 01/16] [X86] Use RORX over SHR imm

---
 llvm/lib/Target/X86/X86InstrShiftRotate.td |  78 ++
 llvm/test/CodeGen/X86/atomic-unordered.ll  |   3 +-
 llvm/test/CodeGen/X86/bmi2.ll  |   6 +-
 llvm/test/CodeGen/X86/cmp-shiftX-maskX.ll  |   3 +-
 llvm/test/CodeGen/X86/pr35636.ll   |   4 +-
 llvm/test/CodeGen/X86/vector-trunc-ssat.ll | 116 ++---
 6 files changed, 143 insertions(+), 67 deletions(-)

diff --git a/llvm/lib/Target/X86/X86InstrShiftRotate.td b/llvm/lib/Target/X86/X86InstrShiftRotate.td
index f951894db1890..238e8e9b6e97f 100644
--- a/llvm/lib/Target/X86/X86InstrShiftRotate.td
+++ b/llvm/lib/Target/X86/X86InstrShiftRotate.td
@@ -879,6 +879,26 @@ let Predicates = [HasBMI2, HasEGPR, In64BitMode] in {
  defm SHLX64 : bmi_shift<"shlx{q}", GR64, i64mem, "_EVEX">, T8, PD, REX_W, EVEX;
 }
 
+
+def immle16_8 : ImmLeaf<i8, [{
+  return Imm <= 16 - 8;
+}]>;
+def immle32_8 : ImmLeaf<i8, [{
+  return Imm <= 32 - 8;
+}]>;
+def immle64_8 : ImmLeaf<i8, [{
+  return Imm <= 64 - 8;
+}]>;
+def immle32_16 : ImmLeaf<i8, [{
+  return Imm <= 32 - 16;
+}]>;
+def immle64_16 : ImmLeaf<i8, [{
+  return Imm <= 64 - 16;
+}]>;
+def immle64_32 : ImmLeaf<i8, [{
+  return Imm <= 64 - 32;
+}]>;
+
 let Predicates = [HasBMI2] in {
   // Prefer RORX which is non-destructive and doesn't update EFLAGS.
   let AddedComplexity = 10 in {
@@ -891,6 +911,64 @@ let Predicates = [HasBMI2] in {
def : Pat<(rotl GR32:$src, (i8 imm:$shamt)),
  (RORX32ri GR32:$src, (ROT32L2R_imm8 imm:$shamt))>;
def : Pat<(rotl GR64:$src, (i8 imm:$shamt)),
  (RORX64ri GR64:$src, (ROT64L2R_imm8 imm:$shamt))>;
+
+// A right shift whose result is truncated to a narrower width can be
+// replaced by RORX when the shift amount is at most the difference
+// between the source and truncated widths: the low bits that are kept
+// are identical, flags are preserved, and the execution cost is the same.
+
+def : Pat<(i8 (trunc (srl GR16:$src, (i8 immle16_8:$shamt)))),
+  (EXTRACT_SUBREG (RORX32ri (INSERT_SUBREG (i32 (IMPLICIT_DEF)), GR16:$src, sub_16bit), imm:$shamt), sub_8bit)>;
+def : Pat<(i8 (trunc (sra GR16:$src, (i8 immle16_8:$shamt)))),
+  (EXTRACT_SUBREG (RORX32ri (INSERT_SUBREG (i32 (IMPLICIT_DEF)), GR16:$src, sub_16bit), imm:$shamt), sub_8bit)>;
+def : Pat<(i8 (trunc (srl GR32:$src, (i8 immle32_8:$shamt)))),
+  (EXTRACT_SUBREG (RORX32ri GR32:$src, imm:$shamt), sub_8bit)>;
+def : Pat<(i8 (trunc (sra GR32:$src, (i8 immle32_8:$shamt)))),
+  (EXTRACT_SUBREG (RORX32ri GR32:$src, imm:$shamt), sub_8bit)>;
+def : Pat<(i8 (trunc (srl GR64:$src, (i8 immle64_8:$shamt)))),
+  (EXTRACT_SUBREG (RORX64ri GR64:$src, imm:$shamt), sub_8bit)>;
+def : Pat<(i8 (trunc (sra GR64:$src, (i8 immle64_8:$shamt)))),
+  (EXTRACT_SUBREG (RORX64ri GR64:$src, imm:$shamt), sub_8bit)>;
+
+
+def : Pat<(i16 (trunc (srl GR32:$src, (i8 immle32_16:$shamt)))),
+  (EXTRACT_SUBREG (RORX32ri GR32:$src, imm:$shamt), sub_16bit)>;
+def : Pat<(i16 (trunc (sra GR32:$src, (i8 immle32_16:$shamt)))),
+  (EXTRACT_SUBREG (RORX32ri GR32:$src, imm:$shamt), sub_16bit)>;
+def : Pat<(i16 (trunc (srl GR64:$src, (i8 immle64_16:$shamt)))),
+  (EXTRACT_SUBREG (RORX64ri GR64:$src, imm:$shamt), sub_16bit)>;
+def : Pat<(i16 (trunc (sra GR64:$src, (i8 immle64_16:$shamt)))),
+  (EXTRACT_SUBREG (RORX64ri GR64:$src, imm:$shamt), sub_16bit)>;
+
+def : Pat<(i32 (trunc (srl GR64:$src, (i8 immle64_32:$shamt)))),
+  (EXTRACT_SUBREG (RORX64ri GR64:$src, imm:$shamt), sub_32bit)>;
+def : Pat<(i32 (trunc (sra GR64:$src, (i8 immle64_32:$shamt)))),
+  (EXTRACT_SUBREG (RORX64ri GR64:$src, imm:$shamt), sub_32bit)>;
+
+
+// Can't expand the load
+def : Pat<(i8 (trunc (srl (loadi32 addr:$src), (i8 immle32_8:$shamt)))),
+  (EXTRACT_SUBREG (RORX32mi addr:$src, imm:$shamt), sub_8bit)>;
+def : Pat<(i8 (trunc (sra (loadi32 addr:$src), (i8 immle32_8:$shamt)))),
+  (EXTRACT_SUBREG (RORX32mi addr:$src, imm:$shamt), sub_8bit)>;
+def : Pat<(i8 (trunc (srl (loadi64 addr:$src), (i8 immle64_8:$shamt)))),
+  (EXTRACT_SUBREG (RORX64mi addr:$src, imm:$shamt), sub_8bit)>;
+def : Pat<(i8 (trunc (sra (loadi64 addr:$src), (i8 immle64_8:$shamt)))),
+  (EXTRACT_SUBREG (RORX64mi addr:$src, imm:$shamt), sub_8bit)>;
+
+
+def : Pat<(i16 (trunc (srl (loadi32 addr:$src), (i8 immle32_16:$shamt)))),
+  (EXTRACT_SUBREG (RORX32mi addr:$src, imm:$shamt), sub_16bit)>;
+def : Pat<(i16 (trunc (sra (loadi32 addr:$src), (i8 immle32_16:$shamt)))),
+  (EXTRACT_SUBREG (RORX32mi addr:$src, imm:$shamt), sub_16bit)>;
+def : Pat<(i16 (trunc (srl (loadi64 addr:$src), (i8 immle64_16:$shamt)))),
+  (EXTRACT_SUBREG (RORX64mi addr:$src, imm:$shamt), sub_16bit)>;
+def : Pat<(i16 (trunc (sra (loadi64 addr:$src), (i8 immle64_16:$shamt)))),
+  (EXTRACT_SUBREG (RORX64mi addr:$src, imm:$shamt), sub_16bit)>;
+
+def : Pat<(i32 (trunc (srl (loadi64 addr:$src), (i8 immle64_32:$shamt)))),
+  (EXTRACT_SUBREG (RORX64mi addr:$src, imm:$shamt), sub_32bit)>;
+def : Pat<(i32 (trunc (sra (loadi64 addr:$src), (i8 immle64_32:$shamt)))),
+  (EXTRACT_SUBREG (RORX64mi addr:$src, imm:$shamt), sub_32bit)>;
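To make the transformation concrete, here is a hand-written sketch of the before and after at the instruction level (not taken from the patch's tests; it assumes only the low byte of the shifted value is consumed):

  # Before: SHR writes EFLAGS, so any flags that are live across it
  # would have to be spilled and restored.
  shrq    $8, %rax              # rax >>= 8, clobbers EFLAGS

  # After: RORX leaves EFLAGS untouched, and the low 8 bits of
  # rotr(rax, 8) equal the low 8 bits of rax >> 8.
  rorxq   $8, %rax, %rax        # rax = rotr(rax, 8), EFLAGS preserved

The two forms differ in the high bits, which is why the patterns only apply when the result is truncated and the shift amount is at most the width difference (the immleN_M predicates above).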

[Lldb-commits] [flang] [lldb] [libcxx] [libc] [llvm] [clang] [libunwind] [clang-tools-extra] [compiler-rt] [lld] [X86] Use RORX over SHR imm (PR #77964)

2024-02-02 Thread Bryce Wilson via lldb-commits

Bryce-MW wrote:

I spent some time trying out something much more complex: starting at the user 
of the flags that has other inputs (ADC, SBB, and CMOVcc are the main ones), trace 
back through the non-flags inputs to see whether the node producing the flags lies 
along their paths, then check the path from there to the flags user for 
instructions that produce flags and see whether they can be rewritten. This works, 
but I felt it was too complicated, it isn't particularly efficient, and it didn't 
seem to improve any code that I tested with.
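Roughly, the backward walk would look something like this (a minimal sketch against the SelectionDAG API, not the code from this PR; the helper name and the flags-operand index are made up):

  // Hypothetical sketch: starting from a node that consumes EFLAGS plus
  // ordinary values (ADC/SBB/CMOVcc), walk back through the non-flags
  // operands and report whether the node defining the flags lies on one
  // of those value paths.
  static bool flagsDefOnValuePath(SDNode *FlagsUser, SDNode *FlagsDef,
                                  unsigned FlagsOpIdx) {
    SmallPtrSet<SDNode *, 16> Visited;
    SmallVector<SDNode *, 16> Worklist;
    for (unsigned I = 0, E = FlagsUser->getNumOperands(); I != E; ++I)
      if (I != FlagsOpIdx) // skip the EFLAGS input itself
        Worklist.push_back(FlagsUser->getOperand(I).getNode());
    while (!Worklist.empty()) {
      SDNode *N = Worklist.pop_back_val();
      if (!Visited.insert(N).second)
        continue; // already visited
      if (N == FlagsDef)
        return true; // the flags producer also feeds a value path
      for (const SDValue &Op : N->op_values())
        Worklist.push_back(Op.getNode());
    }
    return false;
  }

Any flag-producing node found between the flags producer and the flags user on such a path would then be a candidate for rewriting to RORX.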

I have some ideas for future PRs related to avoiding flags spilling so if I 
come up with a better way to do this kind of thing in the future, I can always 
come back to it.

https://github.com/llvm/llvm-project/pull/77964
___
lldb-commits mailing list
lldb-commits@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-commits


[Lldb-commits] [libunwind] [llvm] [flang] [lldb] [libc] [clang] [lld] [compiler-rt] [libcxx] [clang-tools-extra] [X86] Use RORX over SHR imm (PR #77964)

2024-02-02 Thread Bryce Wilson via lldb-commits

https://github.com/Bryce-MW ready_for_review 
https://github.com/llvm/llvm-project/pull/77964


[Lldb-commits] [libc] [clang-tools-extra] [lldb] [flang] [compiler-rt] [libcxx] [llvm] [libunwind] [clang] [lld] [X86] Use RORX over SHR imm (PR #77964)

2024-01-28 Thread Bryce Wilson via lldb-commits

https://github.com/Bryce-MW updated 
https://github.com/llvm/llvm-project/pull/77964


[Lldb-commits] [libc] [clang-tools-extra] [lldb] [flang] [compiler-rt] [libcxx] [llvm] [libunwind] [clang] [lld] [X86] Use RORX over SHR imm (PR #77964)

2024-01-28 Thread Bryce Wilson via lldb-commits

https://github.com/Bryce-MW updated 
https://github.com/llvm/llvm-project/pull/77964


[Lldb-commits] [compiler-rt] [libunwind] [clang] [llvm] [lld] [libc] [flang] [lldb] [clang-tools-extra] [libcxx] [X86] Use RORX over SHR imm (PR #77964)

2024-01-25 Thread Bryce Wilson via lldb-commits

Bryce-MW wrote:

I think the failure on Windows is unrelated. Hopefully a merge fixes it...

https://github.com/llvm/llvm-project/pull/77964


[Lldb-commits] [compiler-rt] [libunwind] [clang] [llvm] [lld] [libc] [flang] [lldb] [clang-tools-extra] [libcxx] [X86] Use RORX over SHR imm (PR #77964)

2024-01-25 Thread Bryce Wilson via lldb-commits

https://github.com/Bryce-MW updated 
https://github.com/llvm/llvm-project/pull/77964


[Lldb-commits] [clang] [flang] [libc] [libcxx] [clang-tools-extra] [lldb] [lld] [libunwind] [llvm] [compiler-rt] [X86] Use RORX over SHR imm (PR #77964)

2024-01-25 Thread Bryce Wilson via lldb-commits


@@ -4216,6 +4217,97 @@ MachineSDNode *X86DAGToDAGISel::emitPCMPESTR(unsigned ROpc, unsigned MOpc,
   return CNode;
 }
 
+// When the consumer of a right shift (arithmetic or logical) wouldn't notice
+// the difference if the instruction was a rotate right instead (because the
+// bits shifted in are truncated away), the shift can be replaced by the RORX
+// instruction from BMI2. This doesn't set flags and can output to a different
+// register. However, this increases code size in most cases, and doesn't leave
+// the high bits in a useful state. There may be other situations where this
+// transformation is profitable given those conditions, but currently the
+// transformation is only made when it likely avoids spilling flags.
+bool X86DAGToDAGISel::rightShiftUncloberFlags(SDNode *N) {
+  EVT VT = N->getValueType(0);
+
+  // Target has to have BMI2 for RORX
+  if (!Subtarget->hasBMI2())
+    return false;
+
+  // Only handle scalar shifts.
+  if (VT.isVector())
+    return false;
+
+  unsigned OpSize;
+  if (VT == MVT::i64)
+    OpSize = 64;
+  else if (VT == MVT::i32)
+    OpSize = 32;
+  else if (VT == MVT::i16)
+    OpSize = 16;
+  else if (VT == MVT::i8)
+    return false; // i8 shift can't be truncated.
+  else
+    llvm_unreachable("Unexpected shift size");
+
+  unsigned TruncateSize = 0;
+  // This only works when the result is truncated.
+  for (const SDNode *User : N->uses()) {
+    auto name = User->getOperationName(CurDAG);
+    if (!User->isMachineOpcode() ||
+        User->getMachineOpcode() != TargetOpcode::EXTRACT_SUBREG)
+      return false;
+    EVT TruncateType = User->getValueType(0);
+    if (TruncateType == MVT::i32)
+      TruncateSize = std::max(TruncateSize, 32U);
+    else if (TruncateType == MVT::i16)
+      TruncateSize = std::max(TruncateSize, 16U);
+    else if (TruncateType == MVT::i8)
+      TruncateSize = std::max(TruncateSize, 8U);
+    else
+      return false;
+  }
+  if (TruncateSize >= OpSize)
+    return false;
+
+  // The shift must be by an immediate that wouldn't expose the zero or sign
+  // extended result.
+  auto *ShiftAmount = dyn_cast<ConstantSDNode>(N->getOperand(1));
+  if (!ShiftAmount || ShiftAmount->getZExtValue() > OpSize - TruncateSize)
+    return false;
+
+  // Only make the replacement when it avoids clobbering used flags. This is a
+  // similar heuristic as used in the conversion to LEA, namely looking at the
+  // operand for an instruction that creates flags where those flags are used.
+  // This will have both false positives and false negatives. Ideally, both of
+  // these happen later on. Perhaps in copy to flags lowering or in register
+  // allocation.
+  bool MightClobberFlags = false;
+  SDNode *Input = N->getOperand(0).getNode();
+  for (auto Use : Input->uses()) {
+    if (Use->getOpcode() == ISD::CopyToReg) {
+      auto *RegisterNode =
+          dyn_cast<RegisterSDNode>(Use->getOperand(1).getNode());
+      if (RegisterNode && RegisterNode->getReg() == X86::EFLAGS) {
+        MightClobberFlags = true;
+        break;
+      }
+    }
+  }
+  if (!MightClobberFlags)
+    return false;

Bryce-MW wrote:

It should be correct? I've clarified the names and explanation a bit, but it's 
possible that I got the logic wrong.
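
As a concrete shape of the case this heuristic is after (a made-up example, not from the test suite): flags are produced before the shift and consumed after it, so a plain SHR in between would force a flags spill, while RORX does not:

  addq    %rsi, %rdi            # produces CF, still needed below
  rorxq   $32, %rdi, %rax       # low 32 bits of rax = high half of rdi;
                                # EFLAGS untouched
  adcq    %r8, %rdx             # consumes the carry from the addq

With a MOV+SHR pair instead of the RORX, the carry would have to be saved across the shift (e.g. SETC or PUSHF) and restored before the ADCQ.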

https://github.com/llvm/llvm-project/pull/77964


[Lldb-commits] [clang] [flang] [libc] [libcxx] [clang-tools-extra] [lldb] [lld] [libunwind] [llvm] [compiler-rt] [X86] Use RORX over SHR imm (PR #77964)

2024-01-25 Thread Bryce Wilson via lldb-commits

https://github.com/Bryce-MW updated 
https://github.com/llvm/llvm-project/pull/77964


[Lldb-commits] [flang] [libunwind] [llvm] [clang] [lld] [lldb] [libc] [clang-tools-extra] [compiler-rt] [libcxx] [X86] Use RORX over SHR imm (PR #77964)

2024-01-24 Thread Bryce Wilson via lldb-commits

https://github.com/Bryce-MW updated 
https://github.com/llvm/llvm-project/pull/77964
