https://gcc.gnu.org/bugzilla/show_bug.cgi?id=114164

Hongtao Liu <liuhongt at gcc dot gnu.org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |liuhongt at gcc dot gnu.org

--- Comment #3 from Hongtao Liu <liuhongt at gcc dot gnu.org> ---
(In reply to Jakub Jelinek from comment #2)
> (In reply to Richard Biener from comment #1)
> > I'm not sure who's responsible to reject this, whether the vectorizer can
> > expect there's a way to create the mask arguments when the simdclone is
> > marked usable by the target or whether it has to verify that itself.
> > 
> > This becomes an ICE if we move vector lowering before vectorization.
> 
> Wasn't this valid when VEC_COND_EXPR allowed the comparison directly in the
> operand?
> Or maybe I misremember.  Certainly I believe -mavx -mno-avx2 should be able
> to do
> 256-bit conditional moves of float/double elements.

Here the mask is v4si (128-bit) while the data vector is v4df (256-bit).
Without AVX512 the x86 backend only supports vcond/vcond_mask patterns whose
mask and data modes have the same size (vcond{,_mask}v4sfv4si or
vcond{,_mask}v4dfv4di), but not vcond{,_mask}v4dfv4si.
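
For illustration, a minimal intrinsics sketch (the function name and the way
the mask is passed are just assumptions) of the same-size case the backend
does handle: a 256-bit double select driven by a 256-bit mask via vblendvpd.

#include <immintrin.h>

/* Sketch of the supported same-size case: v4df data selected by a
   256-bit mask (a v4di all-ones/all-zeros pattern reinterpreted as
   __m256d).  A 128-bit v4si mask cannot be fed to vblendvpd directly,
   which is the missing vcond{,_mask}v4dfv4si case described above.  */
__attribute__((target ("avx")))
static __m256d
select_v4df (__m256d a, __m256d b, __m256d mask)
{
  /* Picks elements from A where the sign bit of the corresponding
     64-bit mask element is set, otherwise from B.  */
  return _mm256_blendv_pd (b, a, mask);
}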

BTW, under AVX we can widen the v4si mask to a v4di mask with

        vshufps xmm1, xmm0, xmm0, 80            # xmm1 = xmm0[0,0,1,1]
        vshufps xmm0, xmm0, xmm0, 250           # xmm0 = xmm0[2,2,3,3]
        vinsertf128     ymm0, ymm1, xmm0, 1


while under AVX2 we can just use vpmovsxdq.
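
The same widening expressed with intrinsics, as a rough sketch (function
names are mine; the first variant mirrors the vshufps/vinsertf128 sequence
above, the second the AVX2 sign-extension):

#include <immintrin.h>

/* Widen a v4si all-ones/all-zeros mask to v4di under plain AVX by
   duplicating each 32-bit element into both halves of a 64-bit lane.  */
__attribute__((target ("avx")))
static __m256i
widen_mask_avx (__m128i m)
{
  __m128 mf = _mm_castsi128_ps (m);
  __m128 lo = _mm_shuffle_ps (mf, mf, 0x50);  /* [0,0,1,1] */
  __m128 hi = _mm_shuffle_ps (mf, mf, 0xFA);  /* [2,2,3,3] */
  return _mm256_castps_si256
    (_mm256_insertf128_ps (_mm256_castps128_ps256 (lo), hi, 1));
}

/* Under AVX2 a single vpmovsxdq (sign-extend each 32-bit mask element
   to 64 bits) does the widening.  */
__attribute__((target ("avx2")))
static __m256i
widen_mask_avx2 (__m128i m)
{
  return _mm256_cvtepi32_epi64 (m);
}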
