https://gcc.gnu.org/bugzilla/show_bug.cgi?id=114427
Bug ID: 114427
Summary: [x86] vec_pack_truncv8si/v4si can be optimized with pblendw instead of pand for AVX2 target
Product: gcc
Version: 14.0
Status: UNCONFIRMED
Severity: normal
Priority: P3
Component: target
Assignee: unassigned at gcc dot gnu.org
Reporter: liuhongt at gcc dot gnu.org
Target Milestone: ---

void
foo (int* a, short* __restrict b, int* c)
{
  for (int i = 0; i != 8; i++)
    b[i] = c[i] + a[i];
}

gcc -O2 -march=x86-64-v3 -S

        mov     eax, 65535
        vmovd   xmm0, eax
        vpbroadcastd    xmm0, xmm0
        vpand   xmm2, xmm0, XMMWORD PTR [rdi+16]
        vpand   xmm1, xmm0, XMMWORD PTR [rdi]
        vpackusdw       xmm1, xmm1, xmm2
        vpand   xmm2, xmm0, XMMWORD PTR [rdx]
        vpand   xmm0, xmm0, XMMWORD PTR [rdx+16]
        vpackusdw       xmm0, xmm2, xmm0
        vpaddw  xmm0, xmm1, xmm0
        vmovdqu XMMWORD PTR [rsi], xmm0

It could be better as below:

        vpxor   %xmm0, %xmm0, %xmm0
        vpblendw        $85, 16(%rdi), %xmm0, %xmm2
        vpblendw        $85, (%rdi), %xmm0, %xmm1
        vpackusdw       %xmm2, %xmm1, %xmm1
        vpblendw        $85, (%rdx), %xmm0, %xmm2
        vpblendw        $85, 16(%rdx), %xmm0, %xmm0
        vpackusdw       %xmm0, %xmm2, %xmm0
        vpaddw  %xmm0, %xmm1, %xmm0
        vmovdqu %xmm0, (%rsi)

Currently, we're using (const_vector:v4si (const_int 0xffff) x4) as the mask to clear the upper 16 bits of each lane, but pblendw with a zero vector can also be used, and a zero vector is much cheaper to materialize than (const_vector:v4si (const_int 0xffff) x4):

        mov     eax, 65535
        vmovd   xmm0, eax
        vpbroadcastd    xmm0, xmm0

pblendw has the same latency as pand, but can be slightly worse from a throughput view (0.33 -> 0.5 on ADL P-core, same on Zen4).