llvmbot wrote:

<!--LLVM PR SUMMARY COMMENT-->
@llvm/pr-subscribers-backend-risc-v
@llvm/pr-subscribers-backend-amdgpu

@llvm/pr-subscribers-clang

Author: Nikita Popov (nikic)

<details>
<summary>Changes</summary>

This patch canonicalizes getelementptr instructions with constant indices to 
use the `i8` source element type. This makes it easier for optimizations to 
recognize that two GEPs are identical, because they don't need to see past many 
different ways to express the same offset.
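
As a sketch of the canonical form (hypothetical IR, not taken from the patch; the byte offset assumes the usual data layout where `i32` is 4 bytes):

```llvm
; Before: the 12-byte offset is spelled through the i32 source element type.
%p = getelementptr inbounds i32, ptr %base, i64 3

; After canonicalization: the same offset expressed in bytes, over i8.
%p = getelementptr inbounds i8, ptr %base, i64 12
```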

This is a first step towards 
https://discourse.llvm.org/t/rfc-replacing-getelementptr-with-ptradd/68699. 
This is limited to constant GEPs only for now, as they have a clear canonical 
form, while we're not yet sure how exactly to deal with variable indices.

The test llvm/test/Transforms/PhaseOrdering/switch_with_geps.ll gives two 
representative examples of the kind of optimization improvement we expect from 
this change. In the first test, SimplifyCFG can now realize that all switch 
branches are actually the same. In the second test, it can convert the switch 
into simple arithmetic. These are representative of common enum optimization 
failures we see in Rust.
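
As an illustration of the first kind of improvement (hypothetical IR, simplified rather than copied from the actual test): before this change, two arms of a switch might compute the same address through different source element types, which hides their equivalence; afterwards they are spelled identically.

```llvm
; Two spellings of "%base plus 4 bytes" that previously looked different:
%a = getelementptr inbounds i32, ptr %base, i64 1
%b = getelementptr inbounds [4 x i16], ptr %base, i64 0, i64 2

; After canonicalization both arms compute the same instruction:
%gep = getelementptr inbounds i8, ptr %base, i64 4
```

Once the arms are textually identical, passes like SimplifyCFG can trivially fold them into a single block.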

---

Patch is 775.99 KiB, truncated to 20.00 KiB below, full version: 
https://github.com/llvm/llvm-project/pull/68882.diff


174 Files Affected:

- (modified) clang/test/CodeGen/PowerPC/builtins-ppc-pair-mma.c (+5-5) 
- (modified) clang/test/CodeGen/aarch64-ls64-inline-asm.c (+9-9) 
- (modified) clang/test/CodeGen/attr-arm-sve-vector-bits-bitcast.c (+24-24) 
- (modified) clang/test/CodeGen/attr-riscv-rvv-vector-bits-bitcast.c (+12-12) 
- (modified) clang/test/CodeGen/cleanup-destslot-simple.c (+2-2) 
- (modified) clang/test/CodeGen/hexagon-brev-ld-ptr-incdec.c (+3-3) 
- (modified) clang/test/CodeGen/ms-intrinsics.c (+6-6) 
- (modified) clang/test/CodeGen/nofpclass.c (+4-4) 
- (modified) clang/test/CodeGen/union-tbaa1.c (+2-2) 
- (modified) clang/test/CodeGenCXX/RelativeVTablesABI/dynamic-cast.cpp (+1-1) 
- (modified) clang/test/CodeGenCXX/RelativeVTablesABI/type-info.cpp (+1-3) 
- (modified) clang/test/CodeGenCXX/microsoft-abi-dynamic-cast.cpp (+6-6) 
- (modified) clang/test/CodeGenCXX/microsoft-abi-typeid.cpp (+1-1) 
- (modified) clang/test/CodeGenObjC/arc-foreach.m (+2-2) 
- (modified) clang/test/CodeGenObjCXX/arc-cxx11-init-list.mm (+1-1) 
- (modified) clang/test/Headers/__clang_hip_math.hip (+12-12) 
- (modified) clang/test/OpenMP/bug57757.cpp (+6-6) 
- (modified) llvm/lib/Transforms/InstCombine/InstructionCombining.cpp (+9) 
- (modified) llvm/test/Analysis/BasicAA/featuretest.ll (+3-3) 
- (modified) llvm/test/CodeGen/AMDGPU/vector-alloca-bitcast.ll (+6-6) 
- (modified) llvm/test/CodeGen/BPF/preserve-static-offset/load-inline.ll (+2-2) 
- (modified) llvm/test/CodeGen/BPF/preserve-static-offset/load-unroll-inline.ll (+2-2) 
- (modified) llvm/test/CodeGen/BPF/preserve-static-offset/load-unroll.ll (+4-4) 
- (modified) llvm/test/CodeGen/BPF/preserve-static-offset/store-unroll-inline.ll (+2-2) 
- (modified) llvm/test/CodeGen/Hexagon/autohvx/vector-align-tbaa.ll (+27-27) 
- (modified) llvm/test/Transforms/Coroutines/coro-async.ll (+1-1) 
- (modified) llvm/test/Transforms/Coroutines/coro-retcon-alloca-opaque-ptr.ll (+1-1) 
- (modified) llvm/test/Transforms/Coroutines/coro-retcon-alloca.ll (+1-1) 
- (modified) llvm/test/Transforms/Coroutines/coro-retcon-once-value.ll (+3-3) 
- (modified) llvm/test/Transforms/Coroutines/coro-retcon-resume-values.ll (+4-4) 
- (modified) llvm/test/Transforms/Coroutines/coro-swifterror.ll (+1-1) 
- (modified) llvm/test/Transforms/InstCombine/2007-03-25-BadShiftMask.ll (+1-1) 
- (modified) llvm/test/Transforms/InstCombine/2009-01-08-AlignAlloca.ll (+1-1) 
- (modified) llvm/test/Transforms/InstCombine/2009-02-20-InstCombine-SROA.ll (+16-16) 
- (modified) llvm/test/Transforms/InstCombine/X86/x86-addsub-inseltpoison.ll (+1-1) 
- (modified) llvm/test/Transforms/InstCombine/X86/x86-addsub.ll (+1-1) 
- (modified) llvm/test/Transforms/InstCombine/add3.ll (+1-1) 
- (modified) llvm/test/Transforms/InstCombine/array.ll (+1-1) 
- (modified) llvm/test/Transforms/InstCombine/assume.ll (+1-1) 
- (modified) llvm/test/Transforms/InstCombine/cast_phi.ll (+2-2) 
- (modified) llvm/test/Transforms/InstCombine/catchswitch-phi.ll (+2-2) 
- (modified) llvm/test/Transforms/InstCombine/compare-alloca.ll (+1-1) 
- (modified) llvm/test/Transforms/InstCombine/extractvalue.ll (+2-2) 
- (modified) llvm/test/Transforms/InstCombine/gep-addrspace.ll (+1-1) 
- (modified) llvm/test/Transforms/InstCombine/gep-canonicalize-constant-indices.ll (+9-9) 
- (modified) llvm/test/Transforms/InstCombine/gep-combine-loop-invariant.ll (+3-3) 
- (modified) llvm/test/Transforms/InstCombine/gep-custom-dl.ll (+2-2) 
- (modified) llvm/test/Transforms/InstCombine/gep-merge-constant-indices.ll (+7-7) 
- (modified) llvm/test/Transforms/InstCombine/gep-vector-indices.ll (+4-4) 
- (modified) llvm/test/Transforms/InstCombine/gep-vector.ll (+1-1) 
- (modified) llvm/test/Transforms/InstCombine/gepphigep.ll (+1-1) 
- (modified) llvm/test/Transforms/InstCombine/getelementptr.ll (+25-24) 
- (modified) llvm/test/Transforms/InstCombine/icmp-custom-dl.ll (+2-2) 
- (modified) llvm/test/Transforms/InstCombine/icmp-gep.ll (+4-4) 
- (modified) llvm/test/Transforms/InstCombine/indexed-gep-compares.ll (+8-8) 
- (modified) llvm/test/Transforms/InstCombine/intptr1.ll (+10-10) 
- (modified) llvm/test/Transforms/InstCombine/intptr2.ll (+2-2) 
- (modified) llvm/test/Transforms/InstCombine/intptr3.ll (+2-2) 
- (modified) llvm/test/Transforms/InstCombine/intptr4.ll (+2-2) 
- (modified) llvm/test/Transforms/InstCombine/intptr5.ll (+2-2) 
- (modified) llvm/test/Transforms/InstCombine/intptr7.ll (+2-2) 
- (modified) llvm/test/Transforms/InstCombine/load-store-forward.ll (+1-1) 
- (modified) llvm/test/Transforms/InstCombine/load.ll (+1-1) 
- (modified) llvm/test/Transforms/InstCombine/loadstore-metadata.ll (+1-1) 
- (modified) llvm/test/Transforms/InstCombine/memchr-5.ll (+24-24) 
- (modified) llvm/test/Transforms/InstCombine/memchr-9.ll (+23-23) 
- (modified) llvm/test/Transforms/InstCombine/memcmp-3.ll (+28-28) 
- (modified) llvm/test/Transforms/InstCombine/memcmp-4.ll (+4-4) 
- (modified) llvm/test/Transforms/InstCombine/memcmp-5.ll (+13-13) 
- (modified) llvm/test/Transforms/InstCombine/memcmp-6.ll (+6-6) 
- (modified) llvm/test/Transforms/InstCombine/memcmp-7.ll (+11-11) 
- (modified) llvm/test/Transforms/InstCombine/memcpy_alloca.ll (+1-1) 
- (modified) llvm/test/Transforms/InstCombine/memrchr-5.ll (+32-32) 
- (modified) llvm/test/Transforms/InstCombine/memset2.ll (+1-1) 
- (modified) llvm/test/Transforms/InstCombine/multi-size-address-space-pointer.ll (+7-7) 
- (modified) llvm/test/Transforms/InstCombine/non-integral-pointers.ll (+2-2) 
- (modified) llvm/test/Transforms/InstCombine/opaque-ptr.ll (+20-22) 
- (modified) llvm/test/Transforms/InstCombine/phi-equal-incoming-pointers.ll (+1-1) 
- (modified) llvm/test/Transforms/InstCombine/phi-timeout.ll (+1-1) 
- (modified) llvm/test/Transforms/InstCombine/phi.ll (+2-2) 
- (modified) llvm/test/Transforms/InstCombine/pr39908.ll (+3-3) 
- (modified) llvm/test/Transforms/InstCombine/pr44242.ll (+1-1) 
- (modified) llvm/test/Transforms/InstCombine/pr58901.ll (+4-3) 
- (modified) llvm/test/Transforms/InstCombine/ptr-replace-alloca.ll (+5-5) 
- (modified) llvm/test/Transforms/InstCombine/select-cmp-br.ll (+8-8) 
- (modified) llvm/test/Transforms/InstCombine/select-gep.ll (+8-8) 
- (modified) llvm/test/Transforms/InstCombine/shift.ll (+4-4) 
- (modified) llvm/test/Transforms/InstCombine/sink_sideeffecting_instruction.ll (+1-1) 
- (modified) llvm/test/Transforms/InstCombine/sprintf-2.ll (+8-8) 
- (modified) llvm/test/Transforms/InstCombine/statepoint-cleanup.ll (+4-4) 
- (modified) llvm/test/Transforms/InstCombine/str-int-3.ll (+23-23) 
- (modified) llvm/test/Transforms/InstCombine/str-int-4.ll (+34-34) 
- (modified) llvm/test/Transforms/InstCombine/str-int-5.ll (+27-27) 
- (modified) llvm/test/Transforms/InstCombine/str-int.ll (+1-1) 
- (modified) llvm/test/Transforms/InstCombine/strcall-bad-sig.ll (+4-4) 
- (modified) llvm/test/Transforms/InstCombine/strcall-no-nul.ll (+13-13) 
- (modified) llvm/test/Transforms/InstCombine/strlen-7.ll (+20-20) 
- (modified) llvm/test/Transforms/InstCombine/strlen-9.ll (+6-6) 
- (modified) llvm/test/Transforms/InstCombine/strncmp-4.ll (+14-14) 
- (modified) llvm/test/Transforms/InstCombine/strncmp-5.ll (+21-21) 
- (modified) llvm/test/Transforms/InstCombine/strncmp-6.ll (+6-6) 
- (modified) llvm/test/Transforms/InstCombine/sub.ll (+3-3) 
- (modified) llvm/test/Transforms/InstCombine/unpack-fca.ll (+27-27) 
- (modified) llvm/test/Transforms/InstCombine/vec_demanded_elts-inseltpoison.ll (+2-2) 
- (modified) llvm/test/Transforms/InstCombine/vec_demanded_elts.ll (+2-2) 
- (modified) llvm/test/Transforms/InstCombine/vec_gep_scalar_arg-inseltpoison.ll (+1-1) 
- (modified) llvm/test/Transforms/InstCombine/vec_gep_scalar_arg.ll (+1-1) 
- (modified) llvm/test/Transforms/InstCombine/vscale_gep.ll (+1-1) 
- (modified) llvm/test/Transforms/InstCombine/wcslen-5.ll (+1-1) 
- (modified) llvm/test/Transforms/LoopUnroll/ARM/upperbound.ll (+2-2) 
- (modified) llvm/test/Transforms/LoopUnroll/peel-loop.ll (+8-8) 
- (modified) llvm/test/Transforms/LoopVectorize/AArch64/deterministic-type-shrinkage.ll (+1-1) 
- (modified) llvm/test/Transforms/LoopVectorize/AArch64/intrinsiccost.ll (+4-4) 
- (modified) llvm/test/Transforms/LoopVectorize/AArch64/sve-cond-inv-loads.ll (+2-2) 
- (modified) llvm/test/Transforms/LoopVectorize/AArch64/sve-interleaved-accesses.ll (+8-8) 
- (modified) llvm/test/Transforms/LoopVectorize/AArch64/sve-widen-phi.ll (+6-6) 
- (modified) llvm/test/Transforms/LoopVectorize/AArch64/vector-reverse-mask4.ll (+4-4) 
- (modified) llvm/test/Transforms/LoopVectorize/AMDGPU/packed-math.ll (+22-22) 
- (modified) llvm/test/Transforms/LoopVectorize/ARM/mve-qabs.ll (+9-9) 
- (modified) llvm/test/Transforms/LoopVectorize/ARM/mve-reductions.ll (+1-1) 
- (modified) llvm/test/Transforms/LoopVectorize/ARM/mve-selectandorcost.ll (+3-3) 
- (modified) llvm/test/Transforms/LoopVectorize/ARM/pointer_iv.ll (+24-24) 
- (modified) llvm/test/Transforms/LoopVectorize/X86/float-induction-x86.ll (+9-9) 
- (modified) llvm/test/Transforms/LoopVectorize/X86/interleaving.ll (+7-7) 
- (modified) llvm/test/Transforms/LoopVectorize/X86/intrinsiccost.ll (+8-8) 
- (modified) llvm/test/Transforms/LoopVectorize/X86/invariant-store-vectorization.ll (+3-3) 
- (modified) llvm/test/Transforms/LoopVectorize/X86/metadata-enable.ll (+412-412) 
- (modified) llvm/test/Transforms/LoopVectorize/X86/pr23997.ll (+6-6) 
- (modified) llvm/test/Transforms/LoopVectorize/X86/small-size.ll (+4-4) 
- (modified) llvm/test/Transforms/LoopVectorize/X86/x86-interleaved-store-accesses-with-gaps.ll (+2-2) 
- (modified) llvm/test/Transforms/LoopVectorize/consecutive-ptr-uniforms.ll (+2-2) 
- (modified) llvm/test/Transforms/LoopVectorize/extract-last-veclane.ll (+1-1) 
- (modified) llvm/test/Transforms/LoopVectorize/float-induction.ll (+8-8) 
- (modified) llvm/test/Transforms/LoopVectorize/induction.ll (+26-26) 
- (modified) llvm/test/Transforms/LoopVectorize/interleaved-accesses.ll (+10-10) 
- (modified) llvm/test/Transforms/LoopVectorize/reduction-inloop-uf4.ll (+3-3) 
- (modified) llvm/test/Transforms/LoopVectorize/runtime-check.ll (+1-1) 
- (modified) llvm/test/Transforms/LoopVectorize/scalar_after_vectorization.ll (+1-1) 
- (modified) llvm/test/Transforms/LoopVectorize/vector-geps.ll (+7-7) 
- (modified) llvm/test/Transforms/LowerMatrixIntrinsics/multiply-fused-dominance.ll (+56-56) 
- (modified) llvm/test/Transforms/LowerMatrixIntrinsics/multiply-fused-loops.ll (+12-12) 
- (modified) llvm/test/Transforms/LowerMatrixIntrinsics/multiply-fused-multiple-blocks.ll (+36-36) 
- (modified) llvm/test/Transforms/LowerMatrixIntrinsics/multiply-fused.ll (+67-67) 
- (modified) llvm/test/Transforms/LowerMatrixIntrinsics/multiply-minimal.ll (+5-5) 
- (modified) llvm/test/Transforms/PhaseOrdering/AArch64/hoisting-sinking-required-for-vectorization.ll (+13-13) 
- (modified) llvm/test/Transforms/PhaseOrdering/AArch64/peel-multiple-unreachable-exits-for-vectorization.ll (+11-11) 
- (modified) llvm/test/Transforms/PhaseOrdering/AArch64/quant_4x4.ll (+48-48) 
- (modified) llvm/test/Transforms/PhaseOrdering/AArch64/sinking-vs-if-conversion.ll (+4-4) 
- (modified) llvm/test/Transforms/PhaseOrdering/ARM/arm_mult_q15.ll (+6-6) 
- (modified) llvm/test/Transforms/PhaseOrdering/X86/excessive-unrolling.ll (+3-3) 
- (modified) llvm/test/Transforms/PhaseOrdering/X86/hoist-load-of-baseptr.ll (+5-5) 
- (modified) llvm/test/Transforms/PhaseOrdering/X86/pixel-splat.ll (+1-1) 
- (modified) llvm/test/Transforms/PhaseOrdering/X86/pr48844-br-to-switch-vectorization.ll (+1-1) 
- (modified) llvm/test/Transforms/PhaseOrdering/X86/pr50555.ll (+2-2) 
- (modified) llvm/test/Transforms/PhaseOrdering/X86/speculation-vs-tbaa.ll (+1-1) 
- (modified) llvm/test/Transforms/PhaseOrdering/X86/spurious-peeling.ll (+6-6) 
- (modified) llvm/test/Transforms/PhaseOrdering/X86/vdiv.ll (+6-6) 
- (modified) llvm/test/Transforms/PhaseOrdering/X86/vec-shift.ll (+4-4) 
- (modified) llvm/test/Transforms/PhaseOrdering/basic.ll (+4-4) 
- (modified) llvm/test/Transforms/PhaseOrdering/loop-access-checks.ll (+3-3) 
- (modified) llvm/test/Transforms/PhaseOrdering/pr39282.ll (+2-2) 
- (modified) llvm/test/Transforms/PhaseOrdering/simplifycfg-options.ll (+1-1) 
- (modified) llvm/test/Transforms/PhaseOrdering/switch_with_geps.ll (+4-52) 
- (modified) llvm/test/Transforms/SLPVectorizer/AArch64/gather-cost.ll (+6-6) 
- (modified) llvm/test/Transforms/SLPVectorizer/AArch64/gather-reduce.ll (+4-4) 
- (modified) llvm/test/Transforms/SLPVectorizer/AArch64/loadorder.ll (+16-16) 
- (modified) llvm/test/Transforms/SLPVectorizer/WebAssembly/no-vectorize-rotate.ll (+1-1) 
- (modified) llvm/test/Transforms/SLPVectorizer/X86/operandorder.ll (+2-2) 
- (modified) llvm/test/Transforms/SLPVectorizer/X86/opt.ll (+7-7) 
- (modified) llvm/test/Transforms/SLPVectorizer/X86/pr46983.ll (+16-16) 
- (modified) llvm/test/Transforms/SLPVectorizer/X86/pr47629-inseltpoison.ll (+136-136) 
- (modified) llvm/test/Transforms/SLPVectorizer/X86/pr47629.ll (+136-136) 
- (modified) llvm/test/Transforms/SampleProfile/pseudo-probe-instcombine.ll (+5-5) 
- (modified) llvm/test/Transforms/Util/strip-gc-relocates.ll (+2-2) 


``````````diff
diff --git a/clang/test/CodeGen/PowerPC/builtins-ppc-pair-mma.c b/clang/test/CodeGen/PowerPC/builtins-ppc-pair-mma.c
index 3922513e22469a..5422d993ff1575 100644
--- a/clang/test/CodeGen/PowerPC/builtins-ppc-pair-mma.c
+++ b/clang/test/CodeGen/PowerPC/builtins-ppc-pair-mma.c
@@ -25,13 +25,13 @@ void test1(unsigned char *vqp, unsigned char *vpp, vector unsigned char vc, unsi
 // CHECK-NEXT:    [[TMP2:%.*]] = extractvalue { <16 x i8>, <16 x i8>, <16 x i8>, <16 x i8> } [[TMP1]], 0
 // CHECK-NEXT:    store <16 x i8> [[TMP2]], ptr [[RESP:%.*]], align 16
 // CHECK-NEXT:    [[TMP3:%.*]] = extractvalue { <16 x i8>, <16 x i8>, <16 x i8>, <16 x i8> } [[TMP1]], 1
-// CHECK-NEXT:    [[TMP4:%.*]] = getelementptr inbounds <16 x i8>, ptr [[RESP]], i64 1
+// CHECK-NEXT:    [[TMP4:%.*]] = getelementptr inbounds i8, ptr [[RESP]], i64 16
 // CHECK-NEXT:    store <16 x i8> [[TMP3]], ptr [[TMP4]], align 16
 // CHECK-NEXT:    [[TMP5:%.*]] = extractvalue { <16 x i8>, <16 x i8>, <16 x i8>, <16 x i8> } [[TMP1]], 2
-// CHECK-NEXT:    [[TMP6:%.*]] = getelementptr inbounds <16 x i8>, ptr [[RESP]], i64 2
+// CHECK-NEXT:    [[TMP6:%.*]] = getelementptr inbounds i8, ptr [[RESP]], i64 32
 // CHECK-NEXT:    store <16 x i8> [[TMP5]], ptr [[TMP6]], align 16
 // CHECK-NEXT:    [[TMP7:%.*]] = extractvalue { <16 x i8>, <16 x i8>, <16 x i8>, <16 x i8> } [[TMP1]], 3
-// CHECK-NEXT:    [[TMP8:%.*]] = getelementptr inbounds <16 x i8>, ptr [[RESP]], i64 3
+// CHECK-NEXT:    [[TMP8:%.*]] = getelementptr inbounds i8, ptr [[RESP]], i64 48
 // CHECK-NEXT:    store <16 x i8> [[TMP7]], ptr [[TMP8]], align 16
 // CHECK-NEXT:    ret void
 //
@@ -60,7 +60,7 @@ void test3(unsigned char *vqp, unsigned char *vpp, vector unsigned char vc, unsi
 // CHECK-NEXT:    [[TMP2:%.*]] = extractvalue { <16 x i8>, <16 x i8> } [[TMP1]], 0
 // CHECK-NEXT:    store <16 x i8> [[TMP2]], ptr [[RESP:%.*]], align 16
 // CHECK-NEXT:    [[TMP3:%.*]] = extractvalue { <16 x i8>, <16 x i8> } [[TMP1]], 1
-// CHECK-NEXT:    [[TMP4:%.*]] = getelementptr inbounds <16 x i8>, ptr [[RESP]], i64 1
+// CHECK-NEXT:    [[TMP4:%.*]] = getelementptr inbounds i8, ptr [[RESP]], i64 16
 // CHECK-NEXT:    store <16 x i8> [[TMP3]], ptr [[TMP4]], align 16
 // CHECK-NEXT:    ret void
 //
@@ -1072,7 +1072,7 @@ void test76(unsigned char *vqp, unsigned char *vpp, vector unsigned char vc, uns
 // CHECK-NEXT:    [[TMP2:%.*]] = extractvalue { <16 x i8>, <16 x i8> } [[TMP1]], 0
 // CHECK-NEXT:    store <16 x i8> [[TMP2]], ptr [[RESP:%.*]], align 16
 // CHECK-NEXT:    [[TMP3:%.*]] = extractvalue { <16 x i8>, <16 x i8> } [[TMP1]], 1
-// CHECK-NEXT:    [[TMP4:%.*]] = getelementptr inbounds <16 x i8>, ptr [[RESP]], i64 1
+// CHECK-NEXT:    [[TMP4:%.*]] = getelementptr inbounds i8, ptr [[RESP]], i64 16
 // CHECK-NEXT:    store <16 x i8> [[TMP3]], ptr [[TMP4]], align 16
 // CHECK-NEXT:    ret void
 //
diff --git a/clang/test/CodeGen/aarch64-ls64-inline-asm.c b/clang/test/CodeGen/aarch64-ls64-inline-asm.c
index ac2dbe1fa1b31a..744d6919b05ee4 100644
--- a/clang/test/CodeGen/aarch64-ls64-inline-asm.c
+++ b/clang/test/CodeGen/aarch64-ls64-inline-asm.c
@@ -16,8 +16,8 @@ void load(struct foo *output, void *addr)
 
 // CHECK-LABEL: @store(
 // CHECK-NEXT:  entry:
-// CHECK-NEXT:    [[TMP1:%.*]] = load i512, ptr [[INPUT:%.*]], align 8
-// CHECK-NEXT:    tail call void asm sideeffect "st64b $0,[$1]", "r,r,~{memory}"(i512 [[TMP1]], ptr [[ADDR:%.*]]) #[[ATTR1]], !srcloc !3
+// CHECK-NEXT:    [[TMP0:%.*]] = load i512, ptr [[INPUT:%.*]], align 8
+// CHECK-NEXT:    tail call void asm sideeffect "st64b $0,[$1]", "r,r,~{memory}"(i512 [[TMP0]], ptr [[ADDR:%.*]]) #[[ATTR1]], !srcloc !3
 // CHECK-NEXT:    ret void
 //
 void store(const struct foo *input, void *addr)
@@ -29,25 +29,25 @@ void store(const struct foo *input, void *addr)
 // CHECK-NEXT:  entry:
 // CHECK-NEXT:    [[TMP0:%.*]] = load i32, ptr [[IN:%.*]], align 4, !tbaa [[TBAA4:![0-9]+]]
 // CHECK-NEXT:    [[CONV:%.*]] = sext i32 [[TMP0]] to i64
-// CHECK-NEXT:    [[ARRAYIDX1:%.*]] = getelementptr inbounds i32, ptr [[IN]], i64 1
+// CHECK-NEXT:    [[ARRAYIDX1:%.*]] = getelementptr inbounds i8, ptr [[IN]], i64 4
 // CHECK-NEXT:    [[TMP1:%.*]] = load i32, ptr [[ARRAYIDX1]], align 4, !tbaa [[TBAA4]]
 // CHECK-NEXT:    [[CONV2:%.*]] = sext i32 [[TMP1]] to i64
-// CHECK-NEXT:    [[ARRAYIDX4:%.*]] = getelementptr inbounds i32, ptr [[IN]], i64 4
+// CHECK-NEXT:    [[ARRAYIDX4:%.*]] = getelementptr inbounds i8, ptr [[IN]], i64 16
 // CHECK-NEXT:    [[TMP2:%.*]] = load i32, ptr [[ARRAYIDX4]], align 4, !tbaa [[TBAA4]]
 // CHECK-NEXT:    [[CONV5:%.*]] = sext i32 [[TMP2]] to i64
-// CHECK-NEXT:    [[ARRAYIDX7:%.*]] = getelementptr inbounds i32, ptr [[IN]], i64 16
+// CHECK-NEXT:    [[ARRAYIDX7:%.*]] = getelementptr inbounds i8, ptr [[IN]], i64 64
 // CHECK-NEXT:    [[TMP3:%.*]] = load i32, ptr [[ARRAYIDX7]], align 4, !tbaa [[TBAA4]]
 // CHECK-NEXT:    [[CONV8:%.*]] = sext i32 [[TMP3]] to i64
-// CHECK-NEXT:    [[ARRAYIDX10:%.*]] = getelementptr inbounds i32, ptr [[IN]], i64 25
+// CHECK-NEXT:    [[ARRAYIDX10:%.*]] = getelementptr inbounds i8, ptr [[IN]], i64 100
 // CHECK-NEXT:    [[TMP4:%.*]] = load i32, ptr [[ARRAYIDX10]], align 4, !tbaa [[TBAA4]]
 // CHECK-NEXT:    [[CONV11:%.*]] = sext i32 [[TMP4]] to i64
-// CHECK-NEXT:    [[ARRAYIDX13:%.*]] = getelementptr inbounds i32, ptr [[IN]], i64 36
+// CHECK-NEXT:    [[ARRAYIDX13:%.*]] = getelementptr inbounds i8, ptr [[IN]], i64 144
 // CHECK-NEXT:    [[TMP5:%.*]] = load i32, ptr [[ARRAYIDX13]], align 4, !tbaa [[TBAA4]]
 // CHECK-NEXT:    [[CONV14:%.*]] = sext i32 [[TMP5]] to i64
-// CHECK-NEXT:    [[ARRAYIDX16:%.*]] = getelementptr inbounds i32, ptr [[IN]], i64 49
+// CHECK-NEXT:    [[ARRAYIDX16:%.*]] = getelementptr inbounds i8, ptr [[IN]], i64 196
 // CHECK-NEXT:    [[TMP6:%.*]] = load i32, ptr [[ARRAYIDX16]], align 4, !tbaa [[TBAA4]]
 // CHECK-NEXT:    [[CONV17:%.*]] = sext i32 [[TMP6]] to i64
-// CHECK-NEXT:    [[ARRAYIDX19:%.*]] = getelementptr inbounds i32, ptr [[IN]], i64 64
+// CHECK-NEXT:    [[ARRAYIDX19:%.*]] = getelementptr inbounds i8, ptr [[IN]], i64 256
 // CHECK-NEXT:    [[TMP7:%.*]] = load i32, ptr [[ARRAYIDX19]], align 4, !tbaa [[TBAA4]]
 // CHECK-NEXT:    [[CONV20:%.*]] = sext i32 [[TMP7]] to i64
 // CHECK-NEXT:    [[S_SROA_10_0_INSERT_EXT:%.*]] = zext i64 [[CONV20]] to i512
diff --git a/clang/test/CodeGen/attr-arm-sve-vector-bits-bitcast.c b/clang/test/CodeGen/attr-arm-sve-vector-bits-bitcast.c
index 22e2e0c2ff102d..323afb64591249 100644
--- a/clang/test/CodeGen/attr-arm-sve-vector-bits-bitcast.c
+++ b/clang/test/CodeGen/attr-arm-sve-vector-bits-bitcast.c
@@ -30,21 +30,21 @@ DEFINE_STRUCT(bool)
 
 // CHECK-128-LABEL: @read_int64(
 // CHECK-128-NEXT:  entry:
-// CHECK-128-NEXT:    [[Y:%.*]] = getelementptr inbounds [[STRUCT_STRUCT_INT64:%.*]], ptr [[S:%.*]], i64 0, i32 1
+// CHECK-128-NEXT:    [[Y:%.*]] = getelementptr inbounds i8, ptr [[S:%.*]], i64 16
 // CHECK-128-NEXT:    [[TMP0:%.*]] = load <2 x i64>, ptr [[Y]], align 16, !tbaa [[TBAA2:![0-9]+]]
 // CHECK-128-NEXT:    [[CAST_SCALABLE:%.*]] = tail call <vscale x 2 x i64> @llvm.vector.insert.nxv2i64.v2i64(<vscale x 2 x i64> undef, <2 x i64> [[TMP0]], i64 0)
 // CHECK-128-NEXT:    ret <vscale x 2 x i64> [[CAST_SCALABLE]]
 //
 // CHECK-256-LABEL: @read_int64(
 // CHECK-256-NEXT:  entry:
-// CHECK-256-NEXT:    [[Y:%.*]] = getelementptr inbounds [[STRUCT_STRUCT_INT64:%.*]], ptr [[S:%.*]], i64 0, i32 1
+// CHECK-256-NEXT:    [[Y:%.*]] = getelementptr inbounds i8, ptr [[S:%.*]], i64 32
 // CHECK-256-NEXT:    [[TMP0:%.*]] = load <4 x i64>, ptr [[Y]], align 16, !tbaa [[TBAA2:![0-9]+]]
 // CHECK-256-NEXT:    [[CAST_SCALABLE:%.*]] = tail call <vscale x 2 x i64> @llvm.vector.insert.nxv2i64.v4i64(<vscale x 2 x i64> undef, <4 x i64> [[TMP0]], i64 0)
 // CHECK-256-NEXT:    ret <vscale x 2 x i64> [[CAST_SCALABLE]]
 //
 // CHECK-512-LABEL: @read_int64(
 // CHECK-512-NEXT:  entry:
-// CHECK-512-NEXT:    [[Y:%.*]] = getelementptr inbounds [[STRUCT_STRUCT_INT64:%.*]], ptr [[S:%.*]], i64 0, i32 1
+// CHECK-512-NEXT:    [[Y:%.*]] = getelementptr inbounds i8, ptr [[S:%.*]], i64 64
 // CHECK-512-NEXT:    [[TMP0:%.*]] = load <8 x i64>, ptr [[Y]], align 16, !tbaa [[TBAA2:![0-9]+]]
 // CHECK-512-NEXT:    [[CAST_SCALABLE:%.*]] = tail call <vscale x 2 x i64> @llvm.vector.insert.nxv2i64.v8i64(<vscale x 2 x i64> undef, <8 x i64> [[TMP0]], i64 0)
 // CHECK-512-NEXT:    ret <vscale x 2 x i64> [[CAST_SCALABLE]]
@@ -56,21 +56,21 @@ svint64_t read_int64(struct struct_int64 *s) {
 // CHECK-128-LABEL: @write_int64(
 // CHECK-128-NEXT:  entry:
 // CHECK-128-NEXT:    [[CAST_FIXED:%.*]] = tail call <2 x i64> @llvm.vector.extract.v2i64.nxv2i64(<vscale x 2 x i64> [[X:%.*]], i64 0)
-// CHECK-128-NEXT:    [[Y:%.*]] = getelementptr inbounds [[STRUCT_STRUCT_INT64:%.*]], ptr [[S:%.*]], i64 0, i32 1
+// CHECK-128-NEXT:    [[Y:%.*]] = getelementptr inbounds i8, ptr [[S:%.*]], i64 16
 // CHECK-128-NEXT:    store <2 x i64> [[CAST_FIXED]], ptr [[Y]], align 16, !tbaa [[TBAA2]]
 // CHECK-128-NEXT:    ret void
 //
 // CHECK-256-LABEL: @write_int64(
 // CHECK-256-NEXT:  entry:
 // CHECK-256-NEXT:    [[CAST_FIXED:%.*]] = tail call <4 x i64> @llvm.vector.extract.v4i64.nxv2i64(<vscale x 2 x i64> [[X:%.*]], i64 0)
-// CHECK-256-NEXT:    [[Y:%.*]] = getelementptr inbounds [[STRUCT_STRUCT_INT64:%.*]], ptr [[S:%.*]], i64 0, i32 1
+// CHECK-256-NEXT:    [[Y:%.*]] = getelementptr inbounds i8, ptr [[S:%.*]], i64 32
 // CHECK-256-NEXT:    store <4 x i64> [[CAST_FIXED]], ptr [[Y]], align 16, !tbaa [[TBAA2]]
 // CHECK-256-NEXT:    ret void
 //
 // CHECK-512-LABEL: @write_int64(
 // CHECK-512-NEXT:  entry:
 // CHECK-512-NEXT:    [[CAST_FIXED:%.*]] = tail call <8 x i64> @llvm.vector.extract.v8i64.nxv2i64(<vscale x 2 x i64> [[X:%.*]], i64 0)
-// CHECK-512-NEXT:    [[Y:%.*]] = getelementptr inbounds [[STRUCT_STRUCT_INT64:%.*]], ptr [[S:%.*]], i64 0, i32 1
+// CHECK-512-NEXT:    [[Y:%.*]] = getelementptr inbounds i8, ptr [[S:%.*]], i64 64
 // CHECK-512-NEXT:    store <8 x i64> [[CAST_FIXED]], ptr [[Y]], align 16, !tbaa [[TBAA2]]
 // CHECK-512-NEXT:    ret void
 //
@@ -84,21 +84,21 @@ void write_int64(struct struct_int64 *s, svint64_t x) {
 
 // CHECK-128-LABEL: @read_float64(
 // CHECK-128-NEXT:  entry:
-// CHECK-128-NEXT:    [[Y:%.*]] = getelementptr inbounds [[STRUCT_STRUCT_FLOAT64:%.*]], ptr [[S:%.*]], i64 0, i32 1
+// CHECK-128-NEXT:    [[Y:%.*]] = getelementptr inbounds i8, ptr [[S:%.*]], i64 16
 // CHECK-128-NEXT:    [[TMP0:%.*]] = load <2 x double>, ptr [[Y]], align 16, !tbaa [[TBAA2]]
 // CHECK-128-NEXT:    [[CAST_SCALABLE:%.*]] = tail call <vscale x 2 x double> @llvm.vector.insert.nxv2f64.v2f64(<vscale x 2 x double> undef, <2 x double> [[TMP0]], i64 0)
 // CHECK-128-NEXT:    ret <vscale x 2 x double> [[CAST_SCALABLE]]
 //
 // CHECK-256-LABEL: @read_float64(
 // CHECK-256-NEXT:  entry:
-// CHECK-256-NEXT:    [[Y:%.*]] = getelementptr inbounds [[STRUCT_STRUCT_FLOAT64:%.*]], ptr [[S:%.*]], i64 0, i32 1
+// CHECK-256-NEXT:    [[Y:%.*]] = getelementptr inbounds i8, ptr [[S:%.*]], i64 32
 // CHECK-256-NEXT:    [[TMP0:%.*]] = load <4 x double>, ptr [[Y]], align 16, !tbaa [[TBAA2]]
 // CHECK-256-NEXT:    [[CAST_SCALABLE:%.*]] = tail call <vscale x 2 x double> @llvm.vector.insert.nxv2f64.v4f64(<vscale x 2 x double> undef, <4 x double> [[TMP0]], i64 0)
 // CHECK-256-NEXT:    ret <vscale x 2 x double> [[CAST_SCALABLE]]
 //
 // CHECK-512-LABEL: @read_float64(
 // CHECK-512-NEXT:  entry:
-// CHECK-512-NEXT:    [[Y:%.*]] = getelementptr inbounds [[STRUCT_STRUCT_FLOAT64:%.*]], ptr [[S:%.*]], i64 0, i32 1
+// CHECK-512-NEXT:    [[Y:%.*]] = getelementptr inbounds i8, ptr [[S:%.*]], i64 64
 // CHECK-512-NEXT:    [[TMP0:%.*]] = load <8 x double>, ptr [[Y]], align 16, !tbaa [[TBAA2]]
 // CHECK-512-NEXT:    [[CAST_SCALABLE:%.*]] = tail call <vscale x 2 x double> @llvm.vector.insert.nxv2f64.v8f64(<vscale x 2 x double> undef, <8 x double> [[TMP0]], i64 0)
 // CHECK-512-NEXT:    ret <vscale x 2 x double> [[CAST_SCALABLE]]
@@ -110,21 +110,21 @@ svfloat64_t read_float64(struct struct_float64 *s) {
 // CHECK-128-LABEL: @write_float64(
 // CHECK-128-NEXT:  entry:
 // CHECK-128-NEXT:    [[CAST_FIXED:%.*]] = tail call <2 x double> @llvm.vector.extract.v2f64.nxv2f64(<vscale x 2 x double> [[X:%.*]], i64 0)
-// CHECK-128-NEXT:    [[Y:%.*]] = getelementptr inbounds [[STRUCT_STRUCT_FLOAT64:%.*]], ptr [[S:%.*]], i64 0, i32 1
+// CHECK-128-NEXT:    [[Y:%.*]] = getelementptr inbounds i8, ptr [[S:%.*]], i64 16
 // CHECK-128-NEXT:    store <2 x double> [[CAST_FIXED]], ptr [[Y]], align 16, !tbaa [[TBAA2]]
 // CHECK-128-NEXT:    ret void
 //
 // CHECK-256-LABEL: @write_float64(
 // CHECK-256-NEXT:  entry:
 // CHECK-256-NEXT:    [[CAST_FIXED:%.*]] = tail call <4 x double> @llvm.vector.extract.v4f64.nxv2f64(<vscale x 2 x double> [[X:%.*]], i64 0)
-// CHECK-256-NEXT:    [[Y:%.*]] = getelementptr inbounds [[STRUCT_STRUCT_FLOAT64:%.*]], ptr [[S:%.*]], i64 0, i32 1
+// CHECK-256-NEXT:    [[Y:%.*]] = getelementptr inbounds i8, ptr [[S:%.*]], i64 32
 // CHECK-256-NEXT:    store <4 x double> [[CAST_FIXED]], ptr [[Y]], align 16, !tbaa [[TBAA2]]
 // CHECK-256-NEXT:    ret void
 //
 // CHECK-512-LABEL: @write_float64(
 // CHECK-512-NEXT:  entry:
 // CHECK-512-NEXT:    [[CAST_FIXED:%.*]] = tail call <8 x double> @llvm.vector.extract.v8f64.nxv2f64(<vscale x 2 x double> [[X:%.*]], i64 0)
-// CHECK-512-NEXT:    [[Y:%.*]] = getelementptr inbounds [[STRUCT_STRUCT_FLOAT64:%.*]], ptr [[S:%.*]], i64 0, i32 1
+// CHECK-512-NEXT:    [[Y:%.*]] = getelementptr inbounds i8, ptr [[S:%.*]], i64 64
 // CHECK-512-NEXT:    store <8 x double> [[CAST_FIXED]], ptr [[Y]], align 16, !tbaa [[TBAA2]]
 // CHECK-512-NEXT:    ret void
 //
@@ -138,21 +138,21 @@ void write_float64(struct struct_float64 *s, svfloat64_t x) {
 
 // CHECK-128-LABEL: @read_bfloat16(
 // CHECK-128-NEXT:  entry:
-// CHECK-128-NEXT:    [[Y:%.*]] = getelementptr inbounds [[STRUCT_STRUCT_BFLOAT16:%.*]], ptr [[S:%.*]], i64 0, i32 1
+// CHECK-128-NEXT:    [[Y:%.*]] = getelementptr inbounds i8, ptr [[S:%.*]], i64 16
 // CHECK-128-NEXT:    [[TMP0:%.*]] = load <8 x bfloat>, ptr [[Y]], align 16, !tbaa [[TBAA2]]
 // CHECK-128-NEXT:    [[CAST_SCALABLE:%.*]] = tail call <vscale x 8 x bfloat> @llvm.vector.insert.nxv8bf16.v8bf16(<vscale x 8 x bfloat> undef, <8 x bfloat> [[TMP0]], i64 0)
 // CHECK-128-NEXT:    ret <vscale x 8 x bfloat> [[CAST_SCALABLE]]
 //
 // CHECK-256-LABEL: @read_bfloat16(
 // CHECK-256-NEXT:  entry:
-// CHECK-256-NEXT:    [[Y:%.*]] = getelementptr inbounds [[STRUCT_STRUCT_BFLOAT16:%.*]], ptr [[S:%.*]], i64 0, i32 1
+// CHECK-256-NEXT:    [[Y:%.*]] = getelementptr inbounds i8, ptr [[S:%.*]], i64 32
 // CHECK-256-NEXT:    [[TMP0:%.*]] = load <16 x bfloat>, ptr [[Y]], align 16, !tbaa [[TBAA2]]
 // CHECK-256-NEXT:    [[CAST_SCALABLE:%.*]] = tail call <vscale x 8 x bfloat> @llvm.vector.insert.nxv8bf16.v16bf16(<vscale x 8 x bfloat> undef, <16 x bfloat> [[TMP0]], i64 0)
 // CHECK-256-NEXT:    ret <vscale x 8 x bfloat> [[CAST_SCALABLE]]
 //
 // CHECK-512-LABEL: @read_bfloat16(
 // CHECK-512-NEXT:  entry:
-// CHECK-512-NEXT:    [[Y:%.*]] = getelementptr inbounds [[STRUCT_STRUCT_BFLOAT16:%.*]], ptr [[S:%.*]], i64 0, i32 1
+// CHECK-512-NEXT:    [[Y:%.*]] = getelementptr inbounds i8, ptr [[S:%.*]], i64 64
 // CHECK-512-NEXT:    [[TMP0:%.*]] = load <32 x bfloat>, ptr [[Y]], align 16, !tbaa [[TBAA2]]
 // CHECK-512-NEXT:    [[CAST_SCALABLE:%.*]] = tail call <vscale x 8 x bfloat> @llvm.vector.insert.nxv8bf16.v32bf16(<vscale x 8 x bfloat> undef, <32 x bfloat> [[TMP0]], i64 0)
 // CHECK-512-NEXT:    ret <vscale x 8 x bfloat> [[CAST_SCALABLE]]
@@ -164,21 +164,21 @@ svbfloat16_t read_bfloat16(struct struct_bfloat16 *s) {
 // CHECK-128-LABEL: @write_bfloat16(
 // CHECK-128-NEXT:  entry:
 // CHECK-128-NEXT:    [[CAST_FIXED:%.*]] = tail call <8 x bfloat> @llvm.vector.extract.v8bf16.nxv8bf16(<vscale x 8 x bfloat> [[X:%.*]], i64 0)
-// CHECK-128-NEXT:    [[Y:%.*]] = getelementptr inbounds [[STRUCT_STRUCT_BFLOAT16:%.*]], ptr [[S:%.*]], i64 0, i32 1
+// CHECK-128-NEXT:    [[Y:%.*]] = getelementptr inbounds i8, ptr [[S:%.*]], i64 16
 // CHECK-128-NEXT:    store <8 x bfloat> [[CAST_FIXED]], ptr [[Y]], align 16, !tbaa [[TBAA2]]
 // CHECK-128-NEXT:    ret void
 //
 // CHECK-256-LABEL: @write_bfloat16(
 // CHECK-256-NEXT:  entry:
 // CHECK-256-NEXT:    [[CAST_FIXED:%.*]] = tail call <16 x bfloat> @llvm.vector.extract.v16bf16.nxv8bf16(<vscale x 8 x bfloat> [[X:%.*]], i64 0)
-// CHECK-256-NEXT:    [[Y:%.*]] = getelementptr inbounds [[STRUCT_STRUCT_BFLOAT16:%.*]], ptr [[S:%.*]], i64 0, i32 1
+// CHECK-256-NEXT:    [[Y:%.*]] = getelementptr inbounds i8, ptr [[S:%.*]], i64 32
 // CHECK-256-NEXT:    store <16 x bfloat> [[CAST_FIXED]], ptr [[Y]], align 16, !tbaa [[TBAA2]]
 // CHECK-256-NEXT:    ret void
 //
 // CHECK-512-LABEL: @write_bfloat16(
 // CHECK-512-NEXT:  entry:
 // CHECK-512-NEXT:    [[CAST_FIXED:%.*]] = tail call <32 x bfloat> @llvm.vector.extract.v32bf16.nxv8bf16(<vscale x 8 x bfloat> [[X:%.*]], i64 0)
-// CHECK-512-NEXT:    [[Y:%.*]] = getelementptr inbounds [[STRUCT_STRUCT_BFLOAT16:%.*]], ptr [[S:%.*]], i64 0, i32 1
+// CHECK-512-NEXT:    [[Y:%.*]] = getelementptr inbounds i8, ptr [[S:%.*]], i64 64
 // CHECK-512-NEXT:    store <32 x bfloat> [[CAST_FIXED]], ptr [[Y]], align 16, !tbaa [[TBAA2]]
 // CHECK-512-NEXT:    ret void
 //
@@ -192,7 +192,7 @@ void write_bfloat16(struct struct_bfloat16 *s, svbfloat16_t x) {
 
 // CHECK-128-LABEL: @read_bool(
 // CHECK-128-NEXT:  entry:
-// CHECK-128-NEXT:    [[Y:%.*]] = getelementptr inbounds [[STRUCT_STRUCT_BOOL:%.*]], ptr [[S:%.*]], i64 0, i32 1
+// CHECK-128-NEXT:    [[Y:%.*]] = getelementptr inbounds i8, ptr [[S:%.*]], i64 2
 // CHECK-128-NEXT:    [[TMP0:%.*]] = load <2 x i8>, ptr [[Y]], align 2, !tbaa [[TBAA2]]
 // CHECK-128-NEXT:    [[CAST_SCALABLE:%.*]] = tail call <vscale x 2 x i8> @llvm.vector.insert.nxv2i8.v2i8(<vscale x 2 x i8> undef, <2 x i8> [[TMP0]], i64 0)
 // CHECK-128-NEXT:    [[TMP1:%.*]] = bitcast <vscale x 2 x i8> [[CAST_SCALABLE]] to <vscale x 16 x i1>
@@ -200,7 +200,7 @@ void write_bfloat16(struct struct_bfloat16 *s, svbfloat16_t x) {
 //
 // CHECK-256-LABEL: @read_bool(
 // CHECK-256-NEXT:  entry:
-// CHECK-256-NEXT:    [[Y:%.*]] = getelementptr inbounds [[STRUCT_STRUCT_BOOL:%.*]], ptr [[S:%.*]], i64 0, i32 1
+// CHECK-256-NEXT:    [[Y:%.*]] = getelementptr inbounds i8, ptr [[S:%.*]], i64 4
 // CHECK-256-NEXT:    [[TMP0:%.*]] = load <4 x i8>, ptr [[Y]], align 2, !tbaa [[TBAA2]]
 // CHECK-256-NEXT:    [[CAST_SCALABLE:%.*]] = tail call <vscale x 2 x i8> @llvm.vector.insert.nxv2i8.v4i8(<vscale x 2 x i8> undef, <4 x i8> [[TMP0]], i64 0)
 // CHECK-256-NEXT:    [[TMP1:%.*]] = bitcast <vscale x 2 x i8> [[CAST_SCALABLE]] to <vscale x 16 x i1>
@@ -208,7 +208,7 @@ void write_bfloat16(struct struct_bfloat16 *s, svbfloat16_t x) {
 //
 // CHECK-512-LABEL: @read_bool(
 // CHECK-512-NEXT:  entry:
-// CHECK-512-NEXT:    [[Y:%.*]] = getelementptr inbounds [[STRUCT_STRUCT_BOOL:%.*]], ptr [[S:%.*]], i64 0, i32 1
+// CHECK-512-NEXT:    [[Y:%.*]] = getelementptr inbounds i8, ptr [[S:%.*]], i64 8
 // CHECK-512-NEXT:    [[TMP0:%.*]] = load <8 x i8>, ptr [[Y]], align 2, !tbaa [[TBAA2]]
 // CHECK-512-NEXT:    [[CAST_SCALABLE:%.*]] = tail call <vscale x 2 x i8> @llvm.vector.insert.nxv2i8.v8i8(<vscale x 2 x i8> undef, <8 x i8> [[TMP0]], i64 0)
 // CHECK-512-NEXT:    [[TMP1:%.*]] = bitcast <vscale x 2 x i8> [[CAST_SCALABLE]] to <vscale x 16 x i1>
@@ -222,7 +222,7 @@ svbool_t read_bool(struct struct_bool *s) {
 // CHECK-128-NEXT:  entry:
 // CHECK-128-NEXT:    [[TMP0:%.*]] = bitcast <vscale x 16 x i1> [[X:%.*]] to <vscale x 2 x i8>
 // CHECK-128-NEXT:    [[CAST_FIXED:%.*]] = tail call <2 x i8> @llvm.vector.extract.v2i8.nxv2i8(<vscale x 2 x i8> [[TMP0]], i64 0)
-// CHECK-128-NEXT:    [[Y:%.*]] = getelementptr inbounds [[STRUCT_STRUCT_BOOL:%.*]], ptr [[S:%.*]], i64 0, i32 1
+// CHECK-128-NEXT:    [[Y:%.*]] = getelementptr inbounds i8, ptr [[S:%.*]], i64 2
 // CHECK-128-NEXT:    store <2 x i8> [[CAST_FIXED]], ptr [[Y]], align 2, !tbaa [[TBAA2]]
 // CHECK-128-NEXT:    ret void
 //
@@ -230,7 +230,7 @@ svbool_t read_bool(struct struct_bool *s) {
 // CHECK-256-NEXT:  entry:
 // CHECK-256-NEXT:    [[TMP0:%.*]] = bitcast <vscale x 16 x i1> [[X:%.*]] to <vscale x 2 x i8>
 // CHECK-256-NEXT:    [[CAST_FIXED:%.*]] = tail call <4 x i8> @llvm.vector.extract.v4i8.nxv2i8(<vscale x 2 x i8> [[TMP0]], i64 0)
-// CHECK-256-NEXT:    [[Y:%.*]] = getelementptr inbounds [[STRUCT_STRUCT_BOOL:%.*]], ptr [[S:%.*]], i64 0, i32 1
+// CHECK-256-NEXT:    [[Y:%.*]] = getelementptr inbounds i8, ptr [[S:%.*]], i64 4
 // CHECK-256-NEXT:...
[truncated]

``````````

</details>


https://github.com/llvm/llvm-project/pull/68882