Oh, sorry, I made a mistake.  It's -mrvv-vector-bits=2048; with that
option, RVV clang doesn't vectorize it.

But ARM SVE GCC doesn't fix the issue: even if we specify -msve-vector-bits=2048,
it still vectorizes it.

https://godbolt.org/z/x7s8Kz87a

I guess LLVM has some magic in its cost model that makes this work well.
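
For reference, here is a hypothetical stand-in for the testcase,
reconstructed from the scalar codegen quoted below, not the exact source
behind the Godbolt link: an 18-iteration counted loop whose result is a
compile-time constant, so the whole function ought to fold to a constant
return.

/* Hypothetical stand-in, for illustration only.  */
int
foo (void)
{
  int x = 0;
  for (int i = 1; i < 19; i++)  /* matches the li a5,1 / li a3,19 loop below */
    x ^= i;                     /* placeholder payload; the result is constant */
  return x;
}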



juzhe.zh...@rivai.ai
 
From: Robin Dapp
Date: 2024-01-11 19:15
To: juzhe.zh...@rivai.ai; Richard Biener
CC: rdapp.gcc; gcc-patches; kito.cheng; Kito.cheng; jeffreyalaw
Subject: Re: [PATCH] RISC-V: Increase scalar_to_vec_cost from 1 to 3
> I think we shouldn't vectorize it with any vlen, since the non-vectorized 
> codegen is much better.
> And also, I have tested -msve-vector-bits=2048, ARM SVE doesn't vectorize it.
> -zvl65536b, RVV Clang also doesn't vectorize it.
 
Of course I agree that optimizing everything to return 0 is
what should happen (tree-ssa-dom or vrp do that).  Unfortunately
they no longer do once the loop has been vectorized.
 
My point is that the cost comparison only has the scalar loop to
compare against, which is:
 
li a5,1          # i = 1
li a3,19         # loop bound
.L2:
mv a4,a5         # keep the previous value of i
addiw a5,a5,1    # i++
bne a5,a3,.L2    # loop while i != 19
 
That's effectively 2 * 18 instructions, more than what we get
when vectorizing - therefore it's not totally outrageous to
vectorize here, and we need to make sure not to go overboard with
costing just for this example.
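(Counting only the addiw/bne pair per iteration, 18 iterations give
2 * 18 = 36 dynamic instructions plus the two li setup instructions;
the mv is presumably treated as free in that accounting.)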
 
What does aarch64's cost comparison look like?  What is, comparatively,
more expensive with their tuning?  I've seen scalar_to_vec = 4 and
vec_to_scalar = 4 but a regular operation is already 2.  That would
equal scalar_to_vec = 2 for us (and is not sufficient), so
something else must still come into play.
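
To make the ratios concrete, here is an illustrative, made-up
per-statement cost function using only the numbers quoted above; the
names and values are assumptions for this sketch, not any target's
actual tuning (real targets set these via their vectorizer cost hooks):

/* Illustrative only: cost table with the values quoted above,
   scalar_to_vec = vec_to_scalar = 4 and a regular vector op = 2.  */
enum stmt_kind { SCALAR_STMT, VECTOR_STMT, SCALAR_TO_VEC, VEC_TO_SCALAR };

static int
example_stmt_cost (enum stmt_kind kind)
{
  switch (kind)
    {
    case SCALAR_TO_VEC: return 4;  /* broadcast a scalar into a vector */
    case VEC_TO_SCALAR: return 4;  /* extract a scalar from a vector */
    case VECTOR_STMT:   return 2;  /* "a regular operation is 2" */
    default:            return 1;  /* scalar statement */
    }
}

The point of the sketch is only the ratio: in that tuning, scalar_to_vec
costs twice a regular operation.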
 
Regards
Robin
 
