https://bugs.llvm.org/show_bug.cgi?id=43381
Bug ID: 43381
Summary: Suboptimal shift+mask code gen with BZHI
Product: libraries
Version: trunk
Hardware: PC
OS: All
Status: NEW
Severity: enhancement
Priority: P
Component: Backend: X86
Assignee: [email protected]
Reporter: [email protected]
CC: [email protected], [email protected],
[email protected], [email protected]
The two C examples at the end of this report should both generate the following
three-instruction sequence when BZHI (from BMI2) is available:
```
movb $60, %cl
bzhiq %rcx, (%rdi), %rax    # load folded into the bzhi; keep low 60 bits
shrq $23, %rax
```
In practice they both generate four instructions, because the load is no longer
folded into the BZHI:
```
movq (%rdi), %rax           # separate load
shrq $23, %rax
movb $37, %cl
bzhiq %rcx, %rax, %rax      # keep low 37 bits after the shift
```
The example C code:
```
/* Mask then shift: keep the low 60 bits, then shift right by 23. */
unsigned long example1(unsigned long *mem) {
    unsigned long temp = *mem & ((1ul << 60) - 1);
    return temp >> 23;
}

/* Shift then mask: shift right by 23, then keep the low 37 bits. */
unsigned long example2(unsigned long *mem) {
    unsigned long temp = *mem >> 23;
    return temp & ((1ul << 37) - 1);
}
```
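For what it's worth, the two functions are equivalent: masking to the low 60
bits before shifting right by 23 leaves the same 37 bits as masking to the low
37 bits after the shift. A minimal self-check (illustrative only, not part of
the original report):
```
#include <assert.h>

int main(void) {
    /* Both orderings keep bits 23..59 of the input, i.e. 37 bits,
       so example1 and example2 must agree on every input. */
    unsigned long x = 0xfedcba9876543210ul;
    assert(((x & ((1ul << 60) - 1)) >> 23) ==
           ((x >> 23) & ((1ul << 37) - 1)));
    return 0;
}
```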
It seems to me that LLVM and/or the X86 backend is biased towards "shift then
mask", but "mask then shift" sometimes generates better code; here it allows
the load to be folded into the BZHI, saving an instruction.
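As a point of comparison, the mask-then-shift form can be requested explicitly
through the BMI2 intrinsic. A minimal sketch (example3 is a hypothetical name
introduced here; _bzhi_u64 comes from immintrin.h and requires -mbmi2):
```
#include <immintrin.h>

/* Sketch: bzhi the loaded value to its low 60 bits, then shift.
   This mirrors the desired three-instruction sequence above. */
unsigned long example3(unsigned long *mem) {
    return _bzhi_u64(*mem, 60) >> 23;
}
```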